text
stringlengths
20
1.01M
url
stringlengths
14
1.25k
dump
stringlengths
9
15
lang
stringclasses
4 values
source
stringclasses
4 values
Django Conditional Expressions are added in Django 1.8. By using Conditional Expressions we can use "If...Elif...Else" expressions while querying the database. Conditional expressions executes series of conditions while querying the database, It checks the condition for every record of the table in database and returns the matching results. Conditional expressions can be nested and also can be combined. The following are the Conditional Expressions in Django and Consider the below model for sample queries class Employee(models.Model): ACCOUNT_TYPE_CHOICES = ( ("REGULAR", 'Regular'), ("GOLD", 'Gold'), ("PLATINUM", 'Platinum'), ) name = models.CharField(max_length=50) joined_on = models.DateField() salary = models.DecimalField() account_type = models.CharField( max_length=10, choices=ACCOUNT_TYPE_CHOICES, default="REGULAR", ) 1. WHEN A When() object is used as a condition inside the query from django.db.models import When, F, Q >>> When(field_name1_on__gt=date(2014, 1, 1),>> When(Q(name__startswith="John") | Q(name__startswith="Paul"), then="name") # we can also use nested lookups 2.CASE A Case() expression is like the if ... elif ... else statement in Python. It executes the conditions one by one until one of the given conditions are satisfied. If no conditions are satisfied then the default value is returned if it is provided otherwise "None" will be returned. from django.db.models import CharField, Case, Value, When >>>ModelName.objects.annotate( ... field=Case( ... When(field1="value1", then=Value('5%')), ... When(field1="value2", then=Value('10%')), ... default=Value('0%'), ... output_field=CharField(), ... ), ... ) If we want to update the account type to PLATINUM if employee has more than 3 years of experiance, to GOLD if employee has more than 2 years of experiance otherwise REGULAR For this we can write the code in two ways: case1: Better Query without using conditional Expressions >>> above_3yrs = date.today() - timedelta(days=365 * 3) >>> above_2yrs = date.today() - timedelta(days=365 * 2) >>> Employee.objects.filter(joined_on__lt=above_3yrs).update(account_type="PLATINUM") >>> Employee.objects.filter(joined_on__lt=above_2yrs).update(account_type="GOLD")Above code hits the database for two times to apply the change case2: Best Query using conditional Expressions >>> from datetime import date >>> above_3yrs = date.today() - timedelta(days=365 * 3) >>> above_2yrs = date.today() - timedelta(days=365 * 2) >>> Employee.objects.update( ... account_type=Case( ... When(joined_on__lte=above_3yrs, ... then=Value("PLATINUM")), ... When(joined_on__lte=above_2yrs, ... then=Value("GOLD")), ... default=Value("REGULAR") ... ), ... ) It hits the database only one time. So, We can reduce the no of queries to the database. By reducing the number of queries on the database, ultimately we can improve the efficiency and response time.
https://micropyramid.com/blog/django-conditional-expression-in-queries/
CC-MAIN-2017-30
en
refinedweb
For not work quite as it was described it would. As these quality problems emerge, the companies in this industry turn to 3rd parties, asking the 3rd parties to help their clients have a better experience. The companies incorporate IP from those third parties and many times, they ask the 3rd parties to service the products. While this often improves the customer experience, it does not solve the problem of the negative externalities (and the implications on the users) of the product. In many cases, the negative externalities drive up costs for the users. Then, along comes the threat of a new approach and technology. It simplifies many of the previous issues. The product looks better on the surface and clients tend to be happier, sooner. It also solves the problem of the negative externalities. It’s a homerun all around. Does everyone move to the new approach/technology en masse? Well, you tell me: do you own a battery-powered car? ————————————- The story above is merely an illustration that history repeats itself and there is a lot to be learned from understanding and spotting patterns. I suppose most people who read this will think of enterprise software, as they read that story. And, when I get to the part about the new approach/technology, they start thinking of SaaS and Cloud. However, the answer to the question is the same, whether we are talking autos or enterprise software: The world does not move en masse in any direction, even though benefits are apparent. I continue to see rhetoric that postulates that the future of enterprise software is simply cloud and SaaS. While its hard to argue this at a conceptual level (given its lack of specificity), I think it trivializes a very complex topic. Not everything will be cloud/SaaS, although those will certainly be two possible delivery models. To really form a view of how enterprise software evolves over the next 10-20 years, I’ve constructed some over-arching hypotheses, which hopefully provides a framework for thinking about new business opportunities in enterprise software. Hypothesis 1: The current model of ‘pushing’ your product through a salesforce does not scale and is not optimal for clients/users. Usability will dominate, and I extend usability to include topics like time-to-value, ease of use, and self-service. Hypothesis 2: The model of paying Systems Integrators to make your products work together (or work in the first place) will enter a secular decline. There will continue to be a strong consulting market for application development, high-end strategy/segmentation, and complex project management. However, clients will no longer tolerate having to pay money just to make things work. Hypothesis 3: Enterprises cannot acquire skills fast enough to exploit new technology. So, on one hand, usability needs to address this. On the other hand, continuing education will need to offer a new method for driving skills development quickly. Continuing education is much more than ‘product training’. In fact, while ‘product training’ is the majority that is paid for today…I believe it will be the minority going forward. Hypothesis 4: There will be different models for software delivery: Cloud, SaaS, On-premise, Outsourced, etc. Therefore, just because a company offers something in a certain model does not mean that they will be successful. Clients will buy the best fit model, based on their business goal and related concerns (security, sustainability, etc). Hypothesis 5: Clients will optimize easy (implementation and ongoing support) and return (on investment and capital). Products that deliver on both are a no-brainer. Products that only hit one of them will be scrutinized. Products that deliver neither, will cease to exist. As I meet with new companies and even assess products that we are building, this is my current framework for thinking through how to identify the potential winners and losers. Reference: A Framework for Enterprise Software from our JCG partner Rob Thomas at the Rob’s Blog blog.
https://www.javacodegeeks.com/2012/10/a-framework-for-enterprise-software.html
CC-MAIN-2017-30
en
refinedweb
.common;32 33 /**34 * Response constasnts35 *36 * @author David Czarnecki37 * @since blojsom 3.038 * @version $Id: ResponseConstants.java,v 1.1 2006/03/20 21:30:54 czarneckid Exp $39 */40 public class ResponseConstants {41 42 public static final String APPROVED_STATUS = "approved";43 public static final String NEW_STATUS = "new";44 public static final String SPAM_STATUS = "spam";45 }46 Java API By Example, From Geeks To Geeks. | Our Blog | Conditions of Use | About Us_ |
http://kickjava.com/src/org/blojsom/plugin/common/ResponseConstants.java.htm
CC-MAIN-2017-30
en
refinedweb
Grant Edwards wrote: > On 2009-08-14, Erik Max Francis <max at alcyone.com> wrote: >> Grant Edwards wrote: >>> On 2009-08-14, Steven D'Aprano <steve at REMOVE-THIS-cybersource.com.au> wrote: >>>> What the hell >>>> would it actually do??? >>> IIRC in C++, >>> >>> cout << "Hello world"; >>> >>> is equivalent to this in C: >>> >>> printf("Hellow world"); >>> >>> or this in Python: >>> >>> print "hellow world" >> Well, plus or minus newlines. > > And a few miscellaneous typos... ... and includes and namespaces :-). -- Erik Max Francis && max at alcyone.com && San Jose, CA, USA && 37 18 N 121 57 W && AIM/Y!M/Skype erikmaxfrancis It's hard to say what I want my legacy to be when I'm long gone. -- Aaliyah
https://mail.python.org/pipermail/python-list/2009-August/547822.html
CC-MAIN-2017-30
en
refinedweb
At this point in our exploration of getting IPython to work on OpenShift we have deduced that we cannot, and should not, have our Docker container be dependent on running as the 'root' user. Simply setting up the Docker container to run as a specific non ‘root’ user wasn’t enough however. This is because in pursuit of a more secure environment, OpenShift actually uses a different user ID for each project when running Docker containers. As I keep noting, user namespaces when available in Docker, should be able to transparently hide any underlying mapping to a special user ID as required by an underlying platform, allowing the Docker container to use what ever user ID it wants. We aren’t there yet, and given that user namespaces were first talked about as coming soon well over a year ago, we could well be waiting some time yet for all the necessary pieces to fall into place to enable that. In the mean time, the best thing you can do to ensure Docker images are portable to different hosting environments, and be as secure as possible, is design your Docker containers to run as a non ‘root’ user, but at the same time be tolerant of running as an arbitrary user ID specified at the time the Docker container is started. File system access permissions In our prior post, where we got to was that when running our IPython Docker container as a random user ID, it would fail even when running some basics The problems basically boiled down to file system access permissions, this being caused by the fact that we were running as a different user ID to what we expected. The first specific problem was that the ‘HOME’ directory environment variable wasn’t set to what was expected for the user we anticipated everything to run as. This meant that instead of the home directory ‘/home/ipython’ being used, it was trying to use ‘/‘ as the home directory. As a first step, lets simply try overriding the ‘HOME’ directory and forcing it to be what we desired it to be by adding to the ‘Dockerfile': ENV HOME=/home/ipython Starting the Docker container with an interactive shell we now get: $ docker run --rm -it -u 100000 -p 8888:8888 jupyter-notebook bash I have no name!@e40f5e18f666:/notebooks$ whoami whoami: cannot find name for user ID 100000 I have no name!@e40f5e18f666:/notebooks$ id uid=100000 gid=0(root) I have no name!@e40f5e18f666:/notebooks$ pwd /notebooks I have no name!@e40f5e18f666:/notebooks$ env | grep HOME HOME=/home/ipython I have no name!@e40f5e18f666:/notebooks$ touch $HOME/magic touch: cannot touch ‘/home/ipython/magic’: Permission denied The ‘HOME’ directory environment variable is now correct, but we still cannot create files due to the fact that the home directory is owned by the ‘ipython’ user and we are running with a different user ID. $ ls -las $HOME total 24 4 drwxr-xr-x 3 ipython ipython 4096 Dec 22 21:53 . 4 drwxr-xr-x 4 root root 4096 Dec 22 21:53 .. 4 -rw-r--r-- 1 ipython ipython 220 Dec 22 21:53 .bash_logout 4 -rw-r--r-- 1 ipython ipython 3637 Dec 22 21:53 .bashrc 4 drwx------ 2 ipython ipython 4096 Dec 22 21:53 .jupyter 4 -rw-r--r-- 1 ipython ipython 675 Dec 22 21:53 .profile Using group access permissions The solution to file system access permission problems one often sees in Docker containers which try to run as a non ‘root’ user is to simply make files and directories world writable. That is, after setting up everything in the ‘Dockerfile’ as the ‘root’ user and before switching the user using a ‘USER’ statement, the ‘chmod’ command is run recursively on any directories and files which the running application might need to update. I personally don’t like this approach of making everything world writable at all. To me it falls into that category of bad practices you wouldn’t use if you were installing an application direct to a host when you aren’t using Docker, so why start now. But what are the alternatives? The more secure alternative that would normally be used to allow multiple users to update the same directories or files are UNIX groups. The big question is whether they are going to be useful in this case or not. As it is, when the home directory for the ‘ipython’ user was created, the directories and files were created with the group ‘ipython’, being a personal group created for the ‘ipython’ user when the ‘adduser’ command was used to create the user account. The problem with the use of a personal group as the primary group for the user and thus the directories and files created, is that it is impossible to know what the random user ID will be and so add it into the personal group in advance. Having the group of the directories and files be a personal group is therefore not going to work. The question now is if the group would normally be set to whatever the primary group is for a named user, what group is actually going to be used when the user ID is being overridden for the container at run time. Lets first look at the case of where we override the user ID but still use one which does have a user defined for it. $ docker run --rm -it -u 5 -p 8888:8888 jupyter-notebook bash games@d0e1f5776ccb:/notebooks$ id uid=5(games) gid=60(games) groups=60(games) Here we specify the user ID ‘5’, which corresponds to the ‘games’ user. That user happens to have a corresponding primary group which maps to its own personal group of ‘games’. In overriding the user ID, the primary group for the user is still picked and used as the effective group. Thus the ‘id’ command shows the ‘gid’ being ’60’, corresponding to the ‘games’ group. Do note that this is only the case where only the user ID was overridden. It so happens that the ‘-u’ option to ‘docker run’ can also be used to override the effective group used as well. $ docker run --rm -it -u 5:1 -p 8888:8888 jupyter-notebook bash games@58d9074c872c:/notebooks$ id uid=5(games) gid=1(daemon) groups=60(games) Here we have overridden the effective group to be group ID of ‘1’, corresponding to the ‘daemon’ group. Back to our random user ID, when we select a user ID which doesn’t have a corresponding user account we see: docker run --rm -it -u 10000 -p 8888:8888 jupyter-notebook bash I have no name!@f4050457c1ee:/notebooks$ id uid=10000 gid=0(root) That is, the effective group is set as the ‘gid’ of ‘0’, corresponding to the group for ‘root’. The end result is that provided that we do not override the effective group as well using the ‘-u’ option, if the user ID specified corresponds to a user account, then the primary group for that user would be used. If instead a random user ID were used for which there did not exist a corresponding user account, then the effective group would be that for the ‘gid’ of ‘0’, which is reserved for the ‘root’ user group. Note that in a hosting service which is effectively using a randomly assigned user ID, it is assumed that it will never select one which overlaps with an existing user ID. This can’t be completely guaranteed, although so long as a hosting service uses user IDs starting at a very large number, it is a good bet it will not clash with an existing user. For OpenShift at least, it appears to allocate user IDs starting somewhere above ‘1000000000’. As to overriding the group as well as the user ID, it is also assumed that a hosting service would not do that. Again, OpenShift at least doesn’t override the group and this is probably the most sensible thing that could be done here as overriding of the group to be some random ID as well, would make the use of UNIX groups inside of the container impossible as nothing would be predictable. In this case I would suggest any hosting service going down this path of allocating user IDs, follow OpenShift’s lead and not override the group ID as doing so would likely just cause a world of hurt. Using a user with effective GID of 0 What now is going to be the most workable solution if we wish to rely on group access permissions? In light of the above observed behaviour what seems might work is to have the special user we created, and which would be the default user specified by the ‘USER’ statement of the ‘Dockerfile', have a primary group with ‘gid’ of ‘0’. That is, we match what would be the primary group used if a random user ID had been used which does not correspond to a user account. By making such a choice for the effective group, it means that the group will be the same for both cases and we can now set up file system permissions correspondingly. Updating our ‘Dockerfile’ based on this, we end up with: RUN adduser --disabled-password --gid 0 --gecos "IPython" ipython RUN mkdir -m 0775 /notebooks && chown ipython:root /notebooks VOLUME /notebooks WORKDIR /notebooks USER ipython # Add a notebook profile.RUN mkdir -p -m 0775 ~ipython/.jupyter/ && \ echo "c.NotebookApp.ip = '*'" >> ~ipython/.jupyter/jupyter_notebook_config.py RUN chmod -R u+w,g+w /home/ipython ENV HOME=/home/ipython The key changes are: - Add the ‘--gid 0’option to ‘adduser’ so that the primary group for user is ‘root’. - Create the ‘/notebooks’ directory with mode ‘0775’ so writable by group. - Move creation of ‘jupyter_notebook_config.py’ down to where we are the non ‘root’ user. - Change permissions on all files and directories in home directory so writable by group. Lets now check what happens for each of the use cases we expect. For the case where the Docker container runs as the default user as specified by the ‘USER’ statement we now get: $ docker run --rm -it -p 8888:8888 jupyter-notebook bash ipython@68d5a31bcc03:/notebooks$ whoami ipython ipython@68d5a31bcc03:/notebooks$ id uid=1000(ipython) gid=0(root) groups=0(root) ipython@68d5a31bcc03:/notebooks$ pwd /notebooks ipython@68d5a31bcc03:/notebooks$ env | grep HOME HOME=/home/ipython ipython@68d5a31bcc03:/notebooks$ touch $HOME/magic ipython@68d5a31bcc03:/notebooks$ touch /notebooks ipython@68d5a31bcc03:/notebooks$ ls -las $HOME total 24 4 drwxrwxr-x 4 ipython root 4096 Dec 23 02:26 . 4 drwxr-xr-x 6 root root 4096 Dec 23 02:26 .. 4 -rw-rw-r-- 1 ipython root 220 Dec 23 02:15 .bash_logout 4 -rw-rw-r-- 1 ipython root 3637 Dec 23 02:15 .bashrc 4 drwxrwxr-x 2 ipython root 4096 Dec 23 02:15 .jupyter 0 -rw-r--r-- 1 ipython root 0 Dec 23 02:26 magic 4 -rw-rw-r-- 1 ipython root 675 Dec 23 02:15 .profile Everything in our checks still works okay and running up the actual Jupyter Notebook application also works fine, with us being able to create and save new notebooks. This is what we would expect as the directories and files are owned by the ‘ipython’ user and we are also running as that user. Of note is that you will now see that the effective group of the user is a ‘gid’ of ‘0’. All the directories and files also have that group. If we use the ‘-u ipython’ or ‘-u 1000’ option, where ‘1000’ was the user ID allocated by the ‘adduser’ command in the ‘Dockerfile’, that all works fine as well. For the case of overriding the user with a random user ID, we get: $ docker run --rm -it -u 10000 -p 8888:8888 jupyter-notebook bash I have no name!@dbe290496d44:/notebooks$ whoami whoami: cannot find name for user ID 10000 I have no name!@dbe290496d44:/notebooks$ id uid=10000 gid=0(root) I have no name!@dbe290496d44:/notebooks$ pwd /notebooks I have no name!@dbe290496d44:/notebooks$ env | grep HOME HOME=/home/ipython I have no name!@dbe290496d44:/notebooks$ touch $HOME/magic I have no name!@dbe290496d44:/notebooks$ touch /notebooks/magic I have no name!@dbe290496d44:/notebooks$ ls -las $HOME total 24 4 drwxrwxr-x 4 ipython root 4096 Dec 23 02:32 . 4 drwxr-xr-x 6 root root 4096 Dec 23 02:32 .. 4 -rw-rw-r-- 1 ipython root 220 Dec 23 02:31 .bash_logout 4 -rw-rw-r-- 1 ipython root 3637 Dec 23 02:31 .bashrc 4 drwxrwxr-x 2 ipython root 4096 Dec 23 02:31 .jupyter 0 -rw-r--r-- 1 10000 root 0 Dec 23 02:32 magic 4 -rw-rw-r-- 1 ipython root 675 Dec 23 02:31 .profile Unlike before when overriding with a random user ID with no corresponding user account, the attempts to create files in the file system now works okay. What you will note though is that the file created is in this case owned by user with user ID of ‘10000’. This worked because the effective group of the random user ID was ‘root’, matching what the directory used, along with the fact that the group permissions of the directory allowed updates by anyone in the same group. Thus it didn’t matter that the user ID was different to the owner of the group. One thing you may note is that when the file ‘magic’ was created, the resulting file wasn’t itself writable to the group. This was the case as the default ‘umask’ setup by Docker when a container is run is ‘0022’. This particular ‘umask’ disables the setting of the ‘w’ flag on the group. Even though this is the case, this is not a problem because from this point on any code that would run, such as the actual Jupyter Notebook application, would only ever run as the same allocated user ID. There is therefore no expectation of any processes running as the original ‘ipython’ user needing to be able to update the file. In other words, that directories and files are fixed up to be writable to group only matters for the original directories and files created as part of the Docker build as the ‘ipython’ user. What happens after that and what the ‘umask’ may be is not important. One final check to go, will this updated version of the ‘jupyter/notebook’ Docker image work on OpenShift, and the answer is that it does indeed now start up okay and does not error out due to the problems with access permissions we had before. If we access the running container on OpenShift we can perform the same checks as above okay. $ oc rsh ipython-3-c7oit I have no name!@ipython-3-c7oit:/notebooks$ whoami whoami: cannot find name for user ID 1000210000 I have no name!@ipython-3-c7oit:/notebooks$ id uid=1000210000 gid=0(root) I have no name!@ipython-3-c7oit:/notebooks$ pwd /notebooks I have no name!@ipython-3-c7oit:/notebooks$ env | grep HOME HOME=/home/ipython I have no name!@ipython-3-c7oit:/notebooks$ touch $HOME/magic I have no name!@ipython-3-c7oit:/notebooks$ touch /notebooks/magic I have no name!@ipython-3-c7oit:/notebooks$ ls -las $HOME total 20 4 drwxrwxr-x. 5 ipython root 4096 Dec 23 03:20 . 0 drwxr-xr-x. 3 root root 20 Dec 23 03:13 .. 4 -rw-------. 1 1000210000 root 31 Dec 23 03:20 .bash_history 4 -rw-rw-r--. 1 ipython root 220 Dec 23 03:13 .bash_logout 4 -rw-rw-r--. 1 ipython root 3637 Dec 23 03:13 .bashrc 0 drwxr-xr-x. 5 1000210000 root 64 Dec 23 03:19 .ipython 0 drwxrwxr-x. 2 ipython root 39 Dec 23 03:14 .jupyter 0 drwx------. 3 1000210000 root 18 Dec 23 03:18 .local 0 -rw-r--r--. 1 1000210000 root 0 Dec 23 03:20 magic 4 -rw-rw-r--. 1 ipython root 675 Dec 23 03:13 .profile Named user vs numeric user ID Before we go on to further verify whether the updated Docker image does in fact work properly on OpenShift, I want to revisit the use of the ‘USER’ statement in the ‘Dockerfile’. Right now the ‘USER’ statement is specifying a default user. This user would be used if you were running the Docker image directly with Docker yourself. As we have seen, if used with OpenShift, the user given by the ‘USER’ statement is actually ignored. The reasons that a hosting service such as OpenShift ignores the user specified by the ‘USER’ statement is that it cannot trust that the user is a non ‘root’ user when the user is specified by way of a name. But also because where a host service provides an ability to mount shared persistent volumes into containers it may want to ensure running containers owned by a specific service account, or a project within a service account, have different user IDs as part of ensuring that there is no way an application could see any data stored on a shared volume created by a different user, if a volume was mounted against the wrong container. Now one of the possibilities I did describe in a prior post was that if a hosting service only supported 12 factor applications and didn’t support persistent data volumes, although it should really still prohibit running a container as ‘root’, it may allow a container to run as the user specified by the ‘USER’ statement so long as it knows it isn’t ‘root’. This though it can only know if a numeric user ID was defined with the ‘USER’ statement. To cater for the possibility, rather than use a user name with the ‘USER’ statement, lets use its numeric user ID instead. Now from the above tests we saw that the numeric user ID for the user ‘ipython’ created by ‘adduser’ was ‘1000’. We could therefore use it with the ‘USER’ statement, however, since what ‘adduser’ will use for the user ID is not technically deterministic, as it can be dependent on what other user accounts may already have been created, but also can depend on what operating system is used, we are better off being explicit and telling ‘adduser’ what user ID to use. What exactly the lowest recommended user ID is for normal user accounts looks to be 500 on Posix and Red Hat systems, and 1000 on OpenSuSE and Debian. Lets therefore go with a number 1000 or above, but just in case an operating system image may include at least a default user account, lets skip 1000 and use 1001 instead. Making this change we now end up with the ‘Dockerfile’ being: RUN adduser --disabled-password --uid 1001 --gid 0 --gecos "IPython" ipythonRUN mkdir -m 0775 /notebooks && chown ipython:root /notebooksVOLUME /notebooks WORKDIR /notebooksUSER 1001# Add a notebook profile. RUN mkdir -p -m 0775 ~ipython/.jupyter/ && \ echo "c.NotebookApp.ip = '*'" >> ~ipython/.jupyter/jupyter_notebook_config.pyRUN chmod -R u+w,g+w /home/ipythonENV HOME=/home/ipython All up this should give us a the most portable solution. Working where the Docker container is hosted directly on Docker, but also working on a hosting service such as OpenShift, which uses Docker under the covers, but which overrides the user ID containers run as. Using a numeric user ID for ‘USER’ also allows a hosting service to still used our preferred user if it does not want to allow containers to run as ‘root’, as will know it can trust that it will run as the user ID indicated. Cannot find name for user ID It would be great to say at this point that we are done and everything works fine. That is however not the case as I will go into in the next post. The remaining problem relates to what happens when we run the ‘whoami’ command: $ docker run --rm -it -u 10000 -p 8888:8888 jupyter-notebook bash I have no name!@dbe290496d44:/notebooks$ whoami whoami: cannot find name for user ID 10000 As we can see, ‘whoami’ isn’t able to return a valid value due to the user ID everything runs as not actually matching a user account. In initially running up the updated Docker image this didn’t appear to prevent the IPython Notebook application running, but as we delve deeper we will see that it can actually cause problems. 2 comments: Thanks for your article. Do you have any idea on how to solve this problem ? whoami: cannot find name for user ID 10000 How to solve the issue explained in this post is in the next post in this series:
http://blog.dscpl.com.au/2015/12/random-user-ids-when-running-docker.html
CC-MAIN-2017-30
en
refinedweb
I have this code" Imports System Imports System.Runtime.InteropServices.ComTypes Imports Microsoft.Expression.Web.Interop.Designer Namespace ExpressionWeb4_addin Public MustInherit Class DocumentEventHandler Inherits HTMLDocumentEvents Implements IDisposable HTMLDocumentEvents is not recognized and its a member of the namespace Microsoft.Expression.Web.Interrop.Designer.HTMLDocumentEvents. I can see the class in the object browser. Even if I start typing Microsoft.Expression.Web.Interop.Designer., Intellisense never shows me the members of this namespace. The fact I can see this namespace in the object browser seems to be indicative that I have it referenced correctly. I'm stuck at this line because the entire tool does not seem to recognize this namespace. View Complete Post Hall of Fame Twitter Terms of Service Privacy Policy Contact Us Archives Tell A Friend
http://www.dotnetspark.com/links/65847-classes-are-not-getting-recognized-vs2010.aspx
CC-MAIN-2017-30
en
refinedweb
1.) Willingness to learn 2.) Know the basics of a computer Instructions: Before we begin your going to need a program to start programming with go to you will be presented with a bunch of downloads, but should download you one that is presented in this picture and you will know which is the right one to download. Assuming you have done the installation process we can now begin ! Part 1.) Let's begin: Before we begin, you should have Eclipse open and start a New project Then we're going to need classes which i will explain later When creating classes, we allways need 1 main class so the project knows where to begin Once were done we should automatically have what looks like this Part 2.) Lesson: Here are some things you should know before you continue onto our exercise! Definitions: Variable - is a facility for storing data. Class - Java is class-based programming, which refers to the style of object-oriented programming in which inheritance is achieved by defining classes of objects. Primitive data types: int - aka Integer, you've probably heard if you have ever been in a Math class but in programming it's a little bit different. It can be a negative, neutral or positive number. been in a Math class but in programming it's a little bit different. It can be a negative, neutral or positive number. String - Is basically a string of words or letters example: "Hello world!" example: "Hello world!" Boolean - Your probably thinking "Wow boolean thats a weird word". It really just has one of two values; true or false. It really just has one of two values; true or false. Note* - There are more data types but they will not be discuss as this is a begineer lesson. Statement structure: A class is made up of a bunch of different statement, the structure for creating a statement goes like this... (data type) (variable name) = (data value); Your probably woundering what the ";" semicolon is doing there, in Java we allways end our statements with semicolon. An example of assignment statements are: int tax = 10; String str = "Hello world!"; Strings should allways have quotations marks around them. To print something to the console to the user can see it we type "System.out.println();" and inside the parenthesis we have what we want to display. Part3.) Exercise: Let's begin making our very first console program ! This is a prime example of a very simple console program named "Hello world!" Input: public class NameOfClasshere { public static void main(String[] args) { int anumber = 10; String hi = "Hello World! "; // Creating a variable named "hi" that hold the text "Hello world!" System.out.println(hi + anumber); // Calling "hi" and "anumber" variable to display to the user } } Output:
http://www.dreamincode.net/forums/topic/158905-introduction-to-java-with-eclipse/
CC-MAIN-2017-30
en
refinedweb
Next Thread | Thread List | Previous Thread Start a Thread | Settings Hi All, I just started learning ARM and I got a STM32L0 Nucleo board with Keils uVision5 as the environment. I used STM32Cube to generate the start-up code and got a blinky example and a button external interrupt example working. I'm trying to get a Timer Interrupt example working by following this tutorial: But I run into some definition problem, error as: CubeProjectOne Configuration\CubeProjectOne Configuration.axf: Error: L6218E: Undefined symbol HAL_TIM_Base_Init (referred from main.o). CubeProjectOne Configuration\CubeProjectOne Configuration.axf: Error: L6218E: Undefined symbol HAL_TIM_Base_Start_IT (referred from main.o). I checked that HAL_TIM_Base_Init and HAL_TIM_Base_Start_IT are located in stm32l0xx_hal_tim.h line 1152, 1153 as: HAL_StatusTypeDef HAL_TIM_Base_Init(TIM_HandleTypeDef *htim); HAL_StatusTypeDef HAL_TIM_Base_DeInit(TIM_HandleTypeDef *htim); But in stm32l0xx_hal_tim.h line 47 (47 #include "stm32l0xx_hal_def.h") there's a red cross and when I hover over it, it says "error in include chain stm32l0xx_hal_rcc_ex.h: unknown type name 'HAL_StatusTypeDef." that repeated for a whole bunch of different header files. Then in file "stm32l0xx_hal_def.h", which is where HAL_StatusTypeDef is located, it has the same error "error in include chain stm32l0xx_hal_rcc_ex.h: unknown type name 'HAL_StatusTypeDef." at line 48 (48 #include "stm32l0xx.h"). I'm not sure if it's a linker problem, multiple definition or what, but I really can't figure out why. Please see my main code as attached. Thanks! /* Includes ------------------------------------------------------------------*/ #include "stm32l0xx.h" #include "stm32l0xx_hal.h" #include "stm32l0xx_hal_tim.h" // For TIM_HandleTypeDef /* USER CODE BEGIN PV */ volatile uint32_t blink_period = 500; TIM_HandleTypeDef TIM_Handle; /* Set Up Timer --------------------------------------------------------------*/ void Timer_SetUp (void) { // 1. Enable Timer __TIM2_CLK_ENABLE(); // 2. set up to toggle at 500 ms TIM_Handle.Init.Prescaler = 15; TIM_Handle.Init.CounterMode = TIM_COUNTERMODE_UP; TIM_Handle.Init.Period = 62499; // 3. Specify HW timer to be used TIM_Handle.Instance = TIM2; // 4. Initialise and start interrupt HAL_TIM_Base_Init(&TIM_Handle); // Init timer HAL_TIM_Base_Start_IT(&TIM_Handle); // start timer interrupts // 5. Unmask timer HAL_NVIC_SetPriority(TIM2_IRQn, 0, 1); HAL_NVIC_EnableIRQ(TIM2_IRQn); } /* Timer handler -------------------------------------------------------------*/ void TIM4_IRQHandler(void) { __HAL_TIM_CLEAR_FLAG(&T2_Handle, TIM_FLAG_UPDATE); /*Some code here */ } /** System Clock Configuration */ void SystemClock_Config(void) { RCC_OscInitTypeDef RCC_OscInitStruct; RCC_ClkInitTypeDef RCC_ClkInitStruct; __PWR_CLK_ENABLE(); __HAL_PWR_VOLTAGESCALING_CONFIG(PWR_REGULATOR_VOLTAGE_SCALE1); RCC_OscInitStruct.OscillatorType = RCC_OSCILLATORTYPE_MSI; RCC_OscInitStruct.MSIState = RCC_MSI_ON; RCC_OscInitStruct.MSICalibrationValue = 0; RCC_OscInitStruct.MSIClockRange = RCC_MSIRANGE_5; RCC_OscInitStruct.PLL.PLLState = RCC_PLL_NONE; HAL_RCC_OscConfig(&RCC_OscInitStruct); RCC_ClkInitStruct.ClockType = RCC_CLOCKTYPE_SYSCLK; RCC_ClkInitStruct.SYSCLKSource = RCC_SYSCLKSOURCE_MSI; RCC_ClkInitStruct.AHBCLKDivider = RCC_SYSCLK_DIV1; RCC_ClkInitStruct.APB1CLKDivider = RCC_HCLK_DIV1; RCC_ClkInitStruct.APB2CLKDivider = RCC_HCLK_DIV1; HAL_RCC_ClockConfig(&RCC_ClkInitStruct, FLASH_LATENCY_0); HAL_SYSTICK_Config(HAL_RCC_GetHCLKFreq()/1000); HAL_SYSTICK_CLKSourceConfig(SYSTICK_CLKSOURCE_HCLK); } Thanks for the reply, but I somehow managed to overwrite my entire project when I re-generated code from STM32Cube and everything got lost. The error was gone though, thanks. I run into the same problem when I'm trying to include the RTC module. After fiddling around forever, this is what I did to solve the issue: 1. Add #define HAL_RTC_MODULE_ENABLED stm32l0xx_hal_conf.h 2. Add #define USE_HAL_DRIVER to stm32l0xx.h 3. Project -> Manage -> Project Items -> Application/User add my own rtc.c 4. Project -> Manage -> Project Items -> Drivers/STM32L0xx_HAL_Driver add stm32l0xx_hal_rtc.c and stm32l0xx_hal_rtc_ex.c I didn't know you have to do step 3 and 4 manually, I thought the linker will automatically add it but guess I was wrong. Anyway, it finally works now. And it seems like Step 2 is just a precaution if you haven't included the _hal.h, not sure which is the better way though... In the end, this is my include chain: main.c includes main.h and rtc.h main.h includes stm32l0xx.h (OR stm32l0xx_hal.h) rtc.h includes stm32l0xx.h (OR stm32l0xx_hal.h) stm32l0xx_hal.h includes stm32l0xx_hal_conf.h stm32l0xx_hal_conf.h includes stm32l0xx_hal_rtc.h And have the code to prevent recursive inclusion everywhere... Thanks! :) Hi, I think that more proper way to do it is by adding to Your Run-Time environment HAL driver for TIM, rather than manually adding source files. In that case proper source files will be automatically added to Your project, and I recommend to do it with all of Yours components. You can do it by entering: Project -> Manage -> Run-Time Environment. Thank you very much for this answer! I had the same problem with a STM32F4, and could solve it by following these steps. The command line "#define HAL_TIM_MODULE_ENABLED" of the stm32f4xx_hal_conf.h file was commented on my project. It was sufficient to decomment it, and the linking worked..
http://www.keil.com/forum/60133/
CC-MAIN-2017-30
en
refinedweb
So you figured out Where Flux Went Wrong and are shipping your app with Redux. How will you measure usage? Will you know how users are using it once it’s launched? What about user authentication? You’ll definitely want to track that. But how will you do it? Will you add Google Analytics integration? Perhaps you’re planning to rock your own dashboards with Keen.io . But what about heat maps, page read statistics, e-commerce integration, or the ability to watch video recordings of your users? Which tools will you choose? How many integrations will you need? How well will those integrations work with Redux? If you’re asking these questions it’s a good thing. But the analytics tooling space is vast. And with so many options, which will you choose? I’m going to let you in on a little tip: don’t choose any of them, choose as many as you can and start experimenting. In this post you will learn how to enhance your reducers to connect your app to hundreds of analytics tooling integrations with a single dependency, at little to no cost and with very minimal effort. Analytics is about learning Have you ever heard of Segment? If not, let me give you the skinny. Segment is an analytics aggregation service providing hundreds of analytics tooling integrations with the toggle of a switch. It’s reliable and easy to work with. So easy, in fact, I added it to my blog . Their pricing model is moderate, even at scale. And, best of all, you can try out dozens of popular integrations such as Google Analytics for free, without any development experience needed. Redux users integrating with Segment are in luck! Turns out the team over at Rangle have graciously open sourced a slick middleware library called segment-redux , making Redux integration with Segment so easy you could do it with a hand tied behind your back. Huge props to @bertrandk on his efforts maintaining it. Let’s install the middleware and start tracking, shall we? Installing the Redux middleware Unless you’re using a space-age abstraction like Redux Providers & Replicators , or aren’t even using React at all, you’re probably using something like React Router alongside react-router-redux to handle routing in your application. If so, you’ll be tickled to know that, once the middleware is installed, you’ll get application page view reporting out of the box. Install the middleware packagewith npm i -S redux-segment and follow along with the two-step install instructions to connect it with Redux. The middleware installation instructions currently assume you’re building a fat client JS app and will be using Analytics.js . But you could just as easily swap in analytics-node if you’re building a Node app instead. Using redux-thunk for async data flow? No problem, the middleware supports that too. Connecting with Segment If you’ve already signed up with an account and create a Project for your app in Segment all you need to do is _.plop the WRITE_KEY provided into your app while initializing, and let the Redux middleware do the rest. The key is safe to share, so you don’t need to worry aboutlocking it up. Sending an event Once you’ve installed the redux-segment middleware and connected with Segment you can start tracking events immediately. As mentioned earlier, users of React Router Redux will enjoy out of the box support and will see Page events start flying the moment they nagivate between routes. Additional events can be set-up with just a few lines of code in your reducer. Here’s an example Identify event we use to track user login events on some of our React apps here at TA: import { EventTypes } from 'redux-segment' export const loginSuccess = (token, user) => ({ type: SESSION_LOGIN_SUCCESS, payload: { token, user, }, meta: { analytics: { eventType: EventTypes.identify, eventPayload: { userId: user.id, traits: (() => { const { email, firstName, imageUrl, lastName, phoneNumber } = user return { avatar: imageUrl, email, firstName, lastName, phone: phoneNumber, } })(), }, }, }, }) Notice how we’ve taken a simple reducer and added a meta property called analytics , along with an event type and payload. That’s all there is to it! See the middleware Usage section for different event types, supported routers and additional documentation. Track to your heart’s content Now that you have an arsenal of analytics tooling integrations at your fingertips you can ask your stakeholders what they want to measure, instead of asking them how they want to go about measuring it. I’ve used Segment in the past to set-up and scale several single-page apps tracked by Google Analytics, track application errors with Sentry and watch users navigate a site to discover UX issues. Currently I’m working on an AWS client-side monitoring solution leveraging Lambda and webhooks. And I get geeked every time someone shows me some new, so please don’t hesitate to gush about your favorite analytics tools in the comments » App Analytics with Redux 评论 抢沙发
http://www.shellsec.com/news/5579.html
CC-MAIN-2017-30
en
refinedweb
Opened 10 years ago Closed 4 years ago #5241 closed Bug (fixed) Response middleware runs too early for streaming responses Description In order to output very large HTML files or do progresive rendering I let a view return an iterator. The actual HTML file is generated at the last step os request processing. It works fine as long as I don't use gettext to translate any variable/string/template/... during generation. If I do, I always get the default translation. I hope the following code snippet will clarify what I mean: def test(request): def f(): print dict(request.session) yield "<html><body>\n" for lang in ('es', 'en'): yield '<a href="/i18n/setlang/?language=%(lang)s">%(lang)s</a><br>'%locals() for i in range(10): yield (_('Current time:')+' %s<br>\n')%datetime.datetime.now() time.sleep(1) yield "Done\n</body></html>\n" return HttpResponse(f()) In this case 'Current time:' is never translated (it is easy to fix in this case but not in others). I found the problem and the patch working with Django 0.96 but I believe it also applies to the development version. Attachments (1) Change History (18) Changed 10 years ago by comment:1 Changed 10 years ago by comment:2 Changed 9 years ago by Passing iterators to HttpResponse and hoping they won't be entirely pulled into memory is not supported. There are too many pieces of middleware and other code that assume they can do random (or at least repeatable) access to the response.content attribute. So it's not worth doing this kind of workaround piecemeal, without a clear plan as to how to write iterator-aware middleware (which I suspect is close to impossible, given the need to access things like the length, in many cases) or specify that certain pieces of middleware shouldn't be run. comment:3 Changed 8 years ago by I am working on a site (my own) that needs translations for both Dutch and English. I found this patch to work for situations where long translation strings are used in templates in combination with for loops. For example, in my case, I have some text (not even too long, but it "expands" to multiple lines in the .po file), and a for loop: {% trans "SmashBits werkt aan software. Voor op het web, of voor op de Mac. Over het laatste binnenkort meer." %} {% comment %} {% for lang in LANGUAGES %} <div class="lang{% if forloop.first %} first{% endif %}"> <form action="/i18n/setlang/" method="POST" name="Form_{{ lang.0 }}"> <input name="next" type="hidden" value="/"> <input type="hidden" name="language" value="{{ lang.0 }}"> <a href="#" onclick="document.Form_{{ lang.0 }}.submit();">{{ lang.1 }}</a> </form> </div> {% endfor %} {% endcomment %} <div class="lang first"> <form action="/i18n/setlang/" method="POST" name="Form_en"> <input name="next" type="hidden" value="/"> <input type="hidden" name="language" value="en"> <a href="#" onclick="document.Form_en.submit();">English</a> </form> </div> <div class="lang"> <form action="/i18n/setlang/" method="POST" name="Form_nl"> <input name="next" type="hidden" value="/"> <input type="hidden" name="language" value="nl"> <a href="#" onclick="document.Form_nl.submit();">Nederlands</a> </form> </div> The text is in Dutch, but will be translated when the user has English as the default language (as can be set by using one of the two forms). The trick is that when I use the code in the {% comment %} block, the text is not translated, but when I simply write it all down, translating works! I don't have too much Python experience, let alone that I know anything about the Django architecture, but the patch works! I reopened this issue because I think this is an issue with the iterator of the template, i.e. the for tag. (patch/reproduction done on 1.0.2.) comment:4 Changed 8 years ago by This isn't "ready for checkin" for so many reasons. Firstly, the patch isn't very good (there shouldn't be any reason to use new.instancemethod; it can be done similarly to other places where methods are created in Django). Secondly, this is only one tiny piece of allowing iterators to work with HttpResponses and piecemeal work on that isn't particularly useful at this point. They are simply unsupported right now, as I noted above. "The patch works" is only one part of any solution. It has to solve a problem properly, not just make the symptom go away. comment:5 Changed 8 years ago by comment:6 Changed 8 years ago by comment:7 Changed 6 years ago by comment:8 Changed 5 years ago by Change UI/UX from NULL to False. comment:9 Changed 5 years ago by Change Easy pickings from NULL to False. comment:10 follow-up: 11 Changed 5 years ago by To be clear, the problem is that LocaleMiddleware.process_response deactivates the current translation, and that everything that runs afterwards uses the default locale. 88b1183425631002a5a8c25631b1b1fad7eb23c5 (the commit on the http-wsgi-improvements branch) isn't acceptable because it removes HttpResponse's current ability to stream. People already use streaming responses (even though they're fragile), and #7581 will add official support for them. But improvements to streaming responses aren't going to fix this: process_responseruns before beginning to send the response; - the point of streaming responses is to generate content on the fly while sending the response. Stupid question — why does LocaleMiddleware need to deactivate the translation in process_response? Couldn't it leave the translation in effect until the next request? This behavior has been here since the merge of the i18n branch: comment:11 Changed 5 years ago by Stupid question — why does LocaleMiddlewareneed to deactivate the translation in process_response? Couldn't it leave the translation in effect until the next request? This behavior has been here since the merge of the i18n branch: +1 to remove translation deactivation, unless the test suite proves we're wrong. comment:12 Changed 5 years ago by comment:13 Changed 5 years ago by Simply removing translation.deactivate() causes some failures, but that's more a lack of isolation in the test suite than anything else. comment:14 Changed 5 years ago by The general problem is that middleware's process_response (and possibly process_exception) run before the content of a streaming response has been generated. This doesn't match the current assumption that these middleware run after the response is generated. At least the transaction middleware suffers from the same problem. If we can't fix both at the same time, we should open another ticket for that one. Off the top of my head -- we could: - run these middleware methods at a later point, - introduce a new middleware method for streaming responses. #19519 is related, but it's a different problem -- it's about firing the request_finished signal. comment:15 Changed 4 years ago by comment:16 Changed 4 years ago by After thinking about this problem for some time, I don't think it's feasible to run the middleware after starting a streaming response. Since you already started sending output: - You cannot change headers — the primary task of process_response - You cannot handle exceptions gracefully — the primary task of process_exception So the answer here is to stop deactivating the translation in process_response. Patch
https://code.djangoproject.com/ticket/5241
CC-MAIN-2017-30
en
refinedweb
vdiff 2.4.0 Efficiently manage the differences between two files using vim. Opens two files in vimdiff and provides single-stroke key mappings to make moving differences between two files efficient. Up to two additional files may be opened at the same time, but these are generally used for reference purposes. Usage vdiff [options] <file1> <file2> [<file3> [<file4>]] Options Relevant Key Mappings Defaults Defaults will be read from ~/.config/vdiff/config if it exists. This is a Python file that is evaluated to determine the value of three variables: vimdiff, gvimdiff, and gui. The first two are the strings used to invoke vimdiff and gvimdiff. The third is a boolean that indicates which should be the default. If gui is true, gvimdiff is used by default, otherwise vimdiff is the default. An example file might contain: vimdiff = 'gvimdiff -v' gvimdiff = 'gvimdiff -f' gui = True These values also happen to be the default defaults. As a Package You can also use vdiff in your own Python programs. To do so, you would do something like the following: from inform import Error from vdiff import Vdiff vdiff = Vdiff(lfile, rfile) try: vdiff.edit() except KeyboardInterrupt: vdiff.cleanup() except Error as err: err.report() Installation Runs only on Unix systems. Requires Python 3.5 or later. Install by running ‘./install’ or ‘pip3 install vdiff’. - Author: Ken Kundert - Download URL: - Keywords: vim,diff - License: GPLv3+ - Categories - Development Status :: 5 - Production/Stable - Environment :: Console - Intended Audience :: End Users/Desktop - License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+) - Natural Language :: English - Operating System :: POSIX :: Linux - Programming Language :: Python :: 3.5 - Topic :: Utilities - Package Index Owner: kenkundert - DOAP record: vdiff-2.4.0.xml
https://pypi.python.org/pypi/vdiff/
CC-MAIN-2017-51
en
refinedweb
#include <iostream> template <int N> struct X { enum {result = 2 * X<N-1>::result};}; template <> struct X<0> { enum {result = 1};}; int main() { std::cout << X<16>::result << std::endl; return 0; } This is an example of metaprogramming with C++ templates. The code would be more clear if I changed the "X" to "Power". At the 1994 C++ Standardization Committee Erwin Unruh showed that compilers can do more complex calcuations. It was latter shown that C++ Template Metaprogramming is Turing Complete, meaning that it can compute anything that is computable. This is in theory. In practice, the C++ Standard recommends 17 levels of recursion. The above code uses 16. For a more detail on Template Metaprogramming see: Reference: C++ Templates
http://cpptrivia.blogspot.com/2010/11/metaprogramming.html
CC-MAIN-2017-51
en
refinedweb
I come across the below in some code from an ex employee. the code is not called, from anywhere, but my question is can it actually do something useful as it is? def xshow(x): print("{[[[[]}".format(x)) That is a format string with an empty argument name and an element index (the part between [ and ] for a key [[[ (those indices don't have to be integers). It will print the value for that key. Calling: xshow({'[[[': 1}) will print 1
https://codedump.io/share/3dSA1Dw4Hx4n/1/what-does-this-strange-format-string-quotquot-do
CC-MAIN-2017-51
en
refinedweb
in my wxPython app which I am developing I have written a method which will add a new record into an access database (.accdb). I have procured this code from online search however am not able to make it work. Below is the code:- def Allocate_sub(self, event): pth = os.getcwd() myDb = pth + '\\myAccessDB.accdb' DRV = '{Microsoft Access Driver (*.mdb)}' PWD = 'pw' # connect to db con = win32com.client.Dispatch(r'ADODB.Connection') con.Open('DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=%s' % (myDb)) cDataset = win32com.client.Dispatch(r'ADODB.Recordset') #cDataset.Open("Allocated_Subs", con, 3, 3, 1) cDataset.Open("Allocated_Subs", con, 3, 3, 1) cDataset.AddNew() cDataset.Fields.Item("Subject").Value = "abc" cDataset.Fields.Item("UniqueKey").Value = "xyzabc" cDataset.Update() cDataset.close() con.close() I figured the solution, posting here just in case someone refers to it... it's a small correction in line cDataset.Open("Allocated_Subs", con, 3, 3, 1) it should be:- cDataset.Open("Allocated_Subs", con, 1, 3) Regards, Premanshu
https://codedump.io/share/YYH0MqUqtDXS/1/python-to-open-adodb-recordset-and-add-new-record
CC-MAIN-2017-51
en
refinedweb
I have created the following Java program. Its basic functionality is to perform addition, subtraction, multiplication, division and modular division on two numbers. I have implemented the concept of object-oriented programming, but it is missing encapsulation. How do I introduce encapsulation in it? My code is: /* * To change this license header, choose License Headers in Project Properties. * To change this template file, choose Tools | Templates * and open the template in the editor. */ /** * * @author piyali */ public class Calculator { /** * @param args the command line arguments */ public static void main(String[] args) { // TODO code application logic here int x, y; x = 13; y = 5; calculation add = new calculation(); calculation sub = new calculation(); calculation mul = new calculation(); calculation div = new calculation(); calculation mod = new calculation(); int addResult = add.addition(x, y); int subResult = sub.subtraction(x, y); int mulResult = mul.multiplication(x, y); int divResult = mul.division(x, y); int modResult = mod.modularDivision(x, y); System.out.println("The addition of the numbers is " +addResult); System.out.println("The subtraction of the two numbers is " +subResult); System.out.println("The multiplication of the two numbers is " + mulResult); System.out.println("The division of the two numbers is " +divResult); System.out.println("The modular division of the two numbers is " + modResult); } } class calculation { int addition(int x, int y){ int z; z = x + y; return(z); } int subtraction(int x, int y){ int z; z = x - y; return(z); } int multiplication(int x, int y){ int z; z = x * y; return(z); } int division(int x, int y){ int z; z = x / y; return(z); } int modularDivision(int x, int y){ int z; z = x % y; return(z); } } Well if you want true OOP and encapsulation, then create interface Calculation which has a method int calculate(). public interface Calculation { int calculate(); } Now create classes which implements this interface such as Addition or Subtraction etc. public class Addition implements Calculation { private final int x; private final int y; public Addition(int x, int y) { this.x = x; this.y = y; } @Override public int calculate(){ return x + y; } } Main method public static void main(String[] args) { int x, y; x = 13; y = 5; Calculation add = new Addition(x, y); System.out.println(add.calculate()); } Advantages of such design is that if you will want to add any extra mathematical operations such as root, percentage or even derivation, you will not need to modify any implementation of class. Just write extra class which implements Calculation.
https://codedump.io/share/9VbblMak4LTJ/1/how-do-i-introduce-encapsulation-in-the-following-java-program
CC-MAIN-2017-51
en
refinedweb
Details Description. Activity - All - Work Log - History - Activity - Transitions Here's a potential fix, which is really klugey. If no one objects to this I'll upload it as a real patch in the next couple of days: diff --git a/src/mapred/org/apache/hadoop/mapred/JobConf.java b/src/mapred/org/apache/hadoop/mapred/JobConf.java index 11be95a..4794eab 100644 --- a/src/mapred/org/apache/hadoop/mapred/JobConf.java +++ b/src/mapred/org/apache/hadoop/mapred/JobConf.java @@ -1452,6 +1452,13 @@ public class JobConf extends Configuration { if (toReturn.startsWith("file:")) { toReturn = toReturn.substring("file:".length()); } + // URLDecoder is a misnamed class, since it actually decodes + // x-www-form-urlencoded MIME type rather than actual + // URL encoding (which the file path has). Therefore it would + // decode +s to ' 's which is incorrect (spaces are actually + // either unencoded or encoded as "%20"). Replace +s first, so + // that they are kept sacred during the decoding process. + toReturn = toReturn.replaceAll("\\+", "%2B"); toReturn = URLDecoder.decode(toReturn, "UTF-8"); return toReturn.replaceAll("!.*$", ""); } Are there any other characters that need to be fixed? I'm worried that this is a point solution for one character rather than the right solution. Owen: that's the only difference I know of that's mentioned in the spec: I spent about 4 hours trying to find a portable non-klugey fix. The trouble is that the behavior is different on Windows compared to Linux. On a Windows JVM, it encodes spaces as "%20" and +s as "%2B" and on Linux it does neither, best I can tell. I definitely agree that this fix is not optimal, but I think '+' is the most common case for a "weird" character in a JAR name. In Debian and RPM packages, the non-alphanumeric characters allowed in version numbers are [+-~.:] and I think all of those should be fine after this patch. Of course, you could detect the operating system like Path does. Look at the definition of Path.WINDOWS. Of course, we probably should make a method that checks that boolean... That said, I'm ok with the quoting as long as it works on both operating systems. (I agree, it is kludgy, but...) passed system test framework compile. The findbugs warnings are bogus (none related to this patch). The release audit warnings are also unrelated ("smoke-tests" file and "robots.txt" file). See MAPREDUCE-2172. +1 This patch looks good to me. Looks like there may be other potential incorrect uses of URLDecode in the HarFileSystem, minding filling a jira to fix those? Looks like there may be other potential incorrect uses of URLDecode in the HarFileSystem, minding filling a jira to fix those? Looking at HarFileSystem, it seems like it's actually safe - it's using URLDecoder to decode something that was encoded using URLEncoder in HadoopArchives.java. Using these as a pair is OK: groovy:000> URLDecoder.decode(URLEncoder.encode("foo+bar baz 100%")) ===> foo+bar baz 100% Integrated in Hadoop-Mapreduce-trunk-Commit #565 (See) MAPREDUCE-714. JobConf.findContainingJar unescapes unnecessarily on linux. Contributed by Todd Lipcon Integrated in Hadoop-Mapreduce-22-branch #33 (See) Integrated in Hadoop-Mapreduce-trunk #643 (See) I just confirmed this on Windows vs Linux. On Windows, the URL that you get back from ClassLoader.getResource has spaces encoded as "%20". On Linux it doesn't. Anyone have a creative solution to deal with this? We'd like to have +s in our version numbers due to standards in RPM and Debian land, but this is blocking that.
https://issues.apache.org/jira/browse/MAPREDUCE-714
CC-MAIN-2017-51
en
refinedweb
1 /* 2 * Copyright (c) 2005,.ws; 27 28 /** The <code>WebServiceException</code> class is the base 29 * exception class for all JAX-WS API runtime exceptions. 30 * 31 * @since JAX-WS 2.0 32 **/ 33 34 public class WebServiceException extends java.lang.RuntimeException { 35 36 /** Constructs a new exception with <code>null</code> as its 37 * detail message. The cause is not initialized. 38 **/ 39 public WebServiceException() { 40 super(); 41 } 42 43 /** Constructs a new exception with the specified detail 44 * message. The cause is not initialized. 45 * @param message The detail message which is later 46 * retrieved using the getMessage method 47 **/ 48 public WebServiceException(String message) { 49 super(message); 50 } 51 52 /** Constructs a new exception with the specified detail 53 * message and cause. 54 * 55 * @param message The detail message which is later retrieved 56 * using the getMessage method 57 * @param cause The cause which is saved for the later 58 * retrieval throw by the getCause method 59 **/ 60 public WebServiceException(String message, Throwable cause) { 61 super(message,cause); 62 } 63 64 /** Constructs a new WebServiceException with the specified cause 65 * and a detail message of <tt>(cause==null ? null : 66 * cause.toString())</tt> (which typically contains the 67 * class and detail message of <tt>cause</tt>). 68 * 69 * @param cause The cause which is saved for the later 70 * retrieval throw by the getCause method. 71 * (A <tt>null</tt> value is permitted, and 72 * indicates that the cause is nonexistent or 73 * unknown.) 74 **/ 75 public WebServiceException(Throwable cause) { 76 super(cause); 77 } 78 79 }
http://checkstyle.sourceforge.net/reports/javadoc/openjdk8/xref/openjdk/jaxws/src/share/jaxws_classes/javax/xml/ws/WebServiceException.html
CC-MAIN-2017-51
en
refinedweb
Sending and receiving multiple streams together with only one thread. More... #include <ortp/rtpsession.h> #include <sys/time.h> #include <sys/types.h> #include <unistd.h> Sending and receiving multiple streams together with only one thread. Removes the session from the set. This macro tests if the session is part of the set. 1 is returned if true, 0 else. This macro adds the rtp session to the set. Frees a SessionSet. Destroys a session set. Allocates and initialize a new empty session set. This function performs similarly as libc select() function, but performs on RtpSession instead of file descriptors. session_set_select() suspends the calling process until some events arrive on one of the three sets passed in argument. Two of the sets can be NULL. The first set recvs is interpreted as a set of RtpSession waiting for receive events: a new buffer (perhaps empty) is availlable on one or more sessions of the set, or the last receive operation with rtp_session_recv_with_ts() would have finished if it were in blocking mode. The second set is interpreted as a set of RtpSession waiting for send events, i.e. the last rtp_session_send_with_ts() call on a session would have finished if it were in blocking mode. When some events arrived on some of sets, then the function returns and sets are changed to indicate the sessions where events happened. Sessions can be added to sets using session_set_set(), a session has to be tested to be part of a set using session_set_is_set().
http://www.linphone.org/docs/ortp/sessionset_8h.html
CC-MAIN-2017-51
en
refinedweb
In this example we will subscribe to an MQTT broker and topic. Again we use the same CloudMQtt, arduino libraries and MQTTlens chrome app that we used in the previous example This is mainly a code example”; #include <WiFi.h> #include <PubSubClient.h> const char* ssid = "wifi username"; const char* password = "wifi password"; const char* mqttServer = "m20.cloudmqtt.com"; const int mqttPort = 17914; const char* mqttUser = "penfkmby"; const char* mqttPassword = "R2C9F3SvwAGS"; WiFiClient espClient; PubSubClient client(espClient); void callback(char* topic, byte* payload, unsigned int length) { Serial.print("Message arrived in topic: "); Serial.println(topic); Serial.print("Message:"); for (int i = 0; i < length; i++) { Serial.print((char)payload[i]); } Serial.println(); Serial.println("-----------------------"); } void setup() { Serial.begin(115200); WiFi.begin(ssid, password); while (WiFi.status() != WL_CONNECTED) { delay(500); Serial.println("Connecting to WiFi.."); } Serial.println("Connected to the WiFi network"); client.setServer(mqttServer, mqttPort); client.setCallback(callback); while (!client.connected()) { Serial.println("Connecting to MQTT..."); if (client.connect("ESP32Client", mqttUser, mqttPassword )) { Serial.println("connected"); } else { Serial.print("failed with state "); Serial.print(client.state()); delay(2000); } } client.subscribe("esp32/esp32test"); } void loop() { client.loop(); } Testing You will need to open the Serial monitor to see any messages sent from the topic, in MQTTlens you will have to publish to the correct topic In the screen capture below I have tried to show this, I have sent the messages and the Arduino serial monitor is displaying them as they arrive
http://www.esp32learning.com/code/subscribing-to-mqtt-topic-using-an-esp32.php
CC-MAIN-2018-51
en
refinedweb
docs.intersystems.com / System Administration / Monitoring Guide / About This Book Monitoring Guide About This Book [Back] Search : This book describes several tools available for monitoring InterSystems IRIS™. The following chapters describe how to monitor InterSystems IRIS with tools included with InterSystems IRIS: Monitoring InterSystems IRIS Using the Management Portal describes how to monitor the many metrics displayed on the System Dashboard of the Management Portal, which shows you the state of your InterSystems IRIS instance at a glance. Using the Diagnostic Report describes how to generate a Diagnostic Report and send it to the WRC. Using Log Monitor Log Monitor monitors the InterSystems IRIS messages log for entries of the configured severity and generates a notification for each. These notifications are written to the alerts log by default but can instead be sent by email to specified recipients. Using System Monitor System Monitor is a flexible, user-extensible utility used to monitor an InterSystems IRIS instance and generate notifications when the values of one or more of a wide range of metrics indicate a potential problem. System Monitor includes a status and resource monitor, which samples important system status and resource usage indicators and generates notifications based on fixed statuses and thresholds; Health Monitor, which samples key system and user-defined metrics and generates notifications based on user-configurable parameters and established normal values; and Application Monitor, which samples significant system metrics, stores them in local namespace globals, and generates notifications based on user-created alert definitions. Alert notifications written to the messages log by System Monitor and Health Monitor can be sent by email using Log Monitor; Application Monitor alerts can be configured for email by the user. The following chapters describe how to monitor InterSystems IRIS History Monitor The following appendixes describe how to monitor InterSystems IRIS with various third-party tools: Monitoring InterSystems IRIS Using BMC PATROL Monitoring InterSystems IRIS Using SNMP Monitoring InterSystems IRIS Using WMI Monitoring InterSystems IRIS Using Web Services Monitoring InterSystems IRIS Using the irisstat Utility For detailed information, see the Table of Contents . [Back] [Top of Page]   © 1997-2018 InterSystems Corporation, Cambridge, MA Content for this page loaded from GCM.xml (file updated 2018-12-09 01:16:30)
https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=GCM_preface
CC-MAIN-2018-51
en
refinedweb
to WebDriver implementations – Firefox, Google Chrome, Internet Explorer and RemoteWebDriver. from selenium import webdriver Step 2: Create a Firefox driver instance driver = webdriver.Chrome();.Chrome(); driver.get(""); Save this in a python file ‘OpenUrlInChromeBrowser.py’ and execute. Note: Refer previous article for Firefox to set driver executable path. To add driver folder path to environment path, use following command in Linux export PATH=$PATH:<folderpath> Example: export PATH=$PATH:/data/WorkArea/BrowserDrivers Method 2 There is another way to start Chrome browser without PATH settings. We have seen this for Firefox Browser Instead of setting PATH environment variable, executable_path property can be passed while invoking Chrome instance. Syntax: driver = webdriver.Chrome(executable_path=”[path to driver location]”); from selenium import webdriver driver = webdriver.Chrome(executable_path="/data/Drivers/chromedriver"); driver.get("");
http://allselenium.info/test-execution-chrome-using-python-selenium/
CC-MAIN-2018-51
en
refinedweb
Used to hold and unique data used to represent #line information. More... #include "clang/Basic/SourceManagerInternals.h" Used to hold and unique data used to represent #line information. Definition at line 81 of file SourceManagerInternals.h. Definition at line 122 of file SourceManagerInternals.h. Add a new line entry that has already been encoded into the internal representation of the line table. Definition at line 264 of file SourceManager.cpp. Referenced by clang::serialization::reader::ASTDeclContextNameLookupTrait::ReadDataInto(). Add a line note to the line table that indicates that there is a #line or GNU line marker at the specified FID/Offset location which changes the presumed location to LineNo/FilenameID. If EntryExit is 0, then this doesn't change the presumed #include stack. If it is 1, this is a file entry, if it is 2 then this is a file exit. FileKind specifies whether this is a system header or extern C system header. Definition at line 210 of file SourceManager.cpp. References clang::LineEntry::get(), and Offset. Definition at line 124 of file SourceManagerInternals.h. Definition at line 96 of file SourceManagerInternals.h. Referenced by clang::SourceManager::clearIDTables(). Definition at line 125 of file SourceManagerInternals.h. Find the line entry nearest to FID that is before it. FindNearestLineEntry - Find the line entry nearest to FID that is before it. If there is no line entry before Offset in FID, returns null. If there is no line entry before Offset in FID, return null. Definition at line 245 of file SourceManager.cpp. Referenced by clang::SourceManager::getFileCharacteristic(), clang::SourceManager::getPresumedLoc(), and clang::SourceManager::isInMainFile(). Definition at line 104 of file SourceManagerInternals.h. Referenced by clang::SourceManager::getPresumedLoc(). Definition at line 197 of file SourceManager.cpp. Referenced by clang::serialization::reader::ASTDeclContextNameLookupTrait::ReadDataInto(). Definition at line 109 of file SourceManagerInternals.h. References clang::LineEntry::FileKind, clang::LineEntry::FilenameID, clang::LineEntry::LineNo, and Offset.
http://clang.llvm.org/doxygen/classclang_1_1LineTableInfo.html
CC-MAIN-2018-51
en
refinedweb
In this example we will connect to an MQTT topic, I used a Wemos Lolin32 – you can use any ESP32 development board We used cloudmqtt which has a free option and then create an instance, you would see something like this ”; Now if we click on the instance that we created you can find the information you need to enter for the MQTT server i have removed the username and password from the image below but this will give you an idea of what you will see Here is the complete code example #include <WiFi.h> #include <PubSubClient.h>"; WiFiClient espClient; PubSubClient client(espClient);); } } client.publish("esp32/esp32test", "Hello from ESP32learning"); } void loop() { client.loop(); } Open the serial monitor and you should see something like the following Connecting to WiFi.. Connecting to WiFi.. Connected to the WiFi network Connecting to MQTT… connected To test this quickly and easily I use MQTTLens in Chrome, you can see in the screen capture below I subscribed to esp32/esp32test and you can also see messages coming through
http://www.esp32learning.com/code/publishing-messages-to-mqtt-topic-using-an-esp32.php
CC-MAIN-2018-51
en
refinedweb
Question: I am writing a action in django.I want to now about the rows which are updated by the action or say id field of the row.I want to make a log of all the actions. I am having a field status which has 3 values :'activate','pending','reject'.I have made action for changing the status to activate.when i perform the action i want to have the log of rows updated so i need some value which can be stored in log such as id coresponding to that row Solution:1 As far as i can understand you want make an admin log-entry for the object you update using your custom action. I actually did something like that, purely as django does it. As its your custom action you can add this piece of code. Edit: Call this function after your action finishes, or rather i should say, after you change the status and save the object. def log_it(request, object, change_message): """ Log this activity """ from django.contrib.admin.models import LogEntry from django.contrib.contenttypes.models import ContentType LogEntry.objects.log_action( user_id = request.user.id, content_type_id = ContentType.objects.get_for_model(object).pk, object_id = object.pk, object_repr = change_message, # Message you want to show in admin action list change_message = change_message, # I used same action_flag = 4 ) # call it after you save your object log_it(request, status_obj, "Status %s activated" % status_obj.pk) You can always get which object you updated by fetching LogEntry object log_entry = LogEntry.objects.filter(action_flag=4)[:1] log_entry[0].get_admin_url() Hope this helps. Solution:2 It is very easy! Just make a loop of your queryset, then you can access each field of that row and store it where you want. for e in queryset: if (e.status != "pending"): flag = False Note:If u also have question or solution just comment us below or mail us on toontricks1994@gmail.com EmoticonEmoticon
http://www.toontricks.com/2018/05/tutorial-django-admin-action-in-11.html
CC-MAIN-2018-51
en
refinedweb
NOTE: - 22 Feb 2015 10:57:03 GMT - Search in distribution BerkeleyDB::Easy is a convenience wrapper around BerkeleyDB.pm. It will reduce the amount of boilerplate you have to write, with special focus on comprehensive and customizable error handling and logging, with minimal overhead....RSCHABER/BerkeleyDB-Easy-0.06 - 06 Sep 2014 07:11:16 GMT - Search in distribution - BerkeleyDB::Easy::Cursor - Cursor to database handle - BerkeleyDB::Easy::Handle - Generic class for Btree, Hash, Recno, Queue, and Heap handles. - lib/BerkeleyDB/Easy/Error.pm - 1 more result from BerkeleyDB-Easy » BerkeleyDB::Lite is an interface to Paul Marquess's BerkeleyDB that provides simplified constructors, tied access to data, and methods for returning multiple record sets. Example 1 BerkeleyDB::Lite maintains BerkeleyDB environment references in a pac...TQISJIM/BerkeleyDB-Lite-1_1 - 08 Aug 2004 03:21:10 GMT - Search in distribution ...TQISJIM/BerkeleyDB-Locks-0_4 - 10 Jul 2004 16:51:52 GMT - Search in distribution This module implements the Cache interface provided by the Cache::Cache family of modules written by DeWitt Clinton. It provides a practically drop-in replacement for Cache::FileCache. As should be obvious from the name, the backend is based on Berke...BALDUR/Cache-BerkeleyDB-0.03 - 02 Feb 2006 13:26:20 GMT - Search in distribution - Cache::BerkeleyDB_Backend - persistance mechanism based on BerkeleyDB forks::BerkeleyDB is a drop-in replacement for threads, written as an extension of forks. The goal of this module is to improve upon the core performance of forks at a level comparable to native ithreads....RYBSKEJ/forks-BerkeleyDB-0.06 - 17 Feb 2009 09:49:11 GMT - Search in distribution - forks::BerkeleyDB::shared - high-performance drop-in replacement for threads::shared - forks::BerkeleyDB::shared::hash - class for tie-ing hashes to BerkeleyDB Btree - forks::BerkeleyDB::shared::array - class for tie-ing arrays to BerkeleyDB Recno - 2 more results from forks-BerkeleyDB » This object provides a convenience wrapper for BerkeleyDB...NUFFIN/BerkeleyDB-Manager-0.12 - 16 Jan 2009 19:01:45 GMT - Search in distribution This module provides an OO/tie-based wrapper for BerkeleyDB CDS implementations intended for use in tied hashes. NOTE: This module breaks significantly with previous incarnations of this module. The primary differences are as follows: Pros ----------...CORLISS/Paranoid-BerkeleyDB-2.03 - 24 Mar 2017 00:25:09 GMT - Search in distribution - Paranoid::BerkeleyDB::Db - BerkeleyDB Db Wrapper - Paranoid::BerkeleyDB::Env - BerkeleyDB CDS Env Object This subclass of "Thesaurus" implements persistence by using a BerkeleyDB file. This module requires the "BerkeleyDB" module from CPAN....DROLSKY/Thesaurus-0.23 - 31 Mar 2007 15:16:19 GMT - Search in distribution - Thesaurus - Maintains lists of associated items - Thesaurus::CSV - Read/write thesarus data from/to a file This cache driver uses Berkeley DB files to store data. Each namespace is stored in its own db file. By default, the driver configures the Berkeley DB environment to use the Concurrent Data Store (CDS), making it safe for multiple processes to read a...MSCHOUT/CHI-Driver-BerkeleyDB-0.05 - 16 Jun 2018 17:07:12 GMT - Search in distribution This module implements locking for the quota db file using BerkeleyDB....DROLSKY/Apache-Quota-0.04 - 30 Mar 2007 15:34:49 GMT - Search in distribution - Apache::Quota - Flexible transfer limiting/throttling under mod_perl is a subclass of the HTML::Index::Store module, that uses Berkeley DB files to store the inverted index....AWRIGLEY/HTML-Index-0.15 - 30 Jun 2003 15:35:32 GMT - Search in distribution - HTML::Index - Perl modules for creating and searching an index of HTML files - HTML::Index::Store - subclass'able module for storing inverted index files for the HTML::Index modules. ROUZIER/CHI-Driver-BerkeleyDB-Manager-0.01 - 12 Jul 2014 19:36:18 GMT - Search in distribution Please see Email::AutoReply::DB, the interface this class implements....AMONSEN/Email-AutoReply-1.04 - 09 Jun 2008 22:04:13 GMT - Search in distribution Data::Session::Driver::BerkeleyDB allows Data::Session to manipulate sessions via BerkeleyDB. To use this module do both of these: o Specify a driver of type BerkeleyDB, as Data::Session -> new(type => 'driver:BerkeleyDB ...') o Specify a cache objec...RSAVAGE/Data-Session-1.17 - 13 Feb 2016 22:45:07 GMT - Search in distribution - Data::Session - Persistent session data management - Data::Session::CGISession - A persistent session manager This role provides most of what's needed in order to store spam-checking data in a BerkeleyDB file. The only method you must implement in your class is the "$class->_store_value()" method. Typically, this will be a database containing things like bad...DROLSKY/Antispam-Toolkit-0.08 - 16 Jan 2011 19:47:11 GMT - Search in distribution - Antispam::Toolkit - Classes, roles, and types for use by other Antispam modules Adds an accessor for a BerkeleyDB cache in your Catalyst application class....DKAMHOLZ/Catalyst-Plugin-Cache-BerkeleyDB-0.01 - 10 Mar 2006 05:44:41 GMT - Search in distribution DMAKI/Data-Localize-0.00027 - 18 Oct 2014 23:15:39 GMT - Search in distribution - Data::Localize - Alternate Data Localization API GMT - Search in distribution
https://metacpan.org/search?q=BerkeleyDB
CC-MAIN-2018-51
en
refinedweb
Simple list_filter that offers filtering by __isnull. Project description Simple list_filter that offers filtering by __isnull. Documentation The full documentation is at. Quickstart Install Django isNull list_filter: pip install django-isnull-list-filter or use development version: pip install -e git+ Directly use it in your admin: from isnull_filter import isnull_filter class MyAdmin(admin.ModelAdmin): list_filter = ( isnull_filter('author'), # Just set the field isnull_filter('author', _("Hasn't got author")), # Or you can override the default filter title ) Features - Can be used on: - simple field - ForeignKeyField - related ForeignKeyField - ManyToManyField - OneToOneField - Default title can be overriden Running Tests Does the code actually work? source <YOURVIRTUALENV>/bin/activate (myenv) $ pip install tox (myenv) $ tox Credits Author: - Petr Dlouhý Tools used in rendering this package: History 0.1.0 (2017-04-26) - First release on PyPI. Project details Release history Release notifications Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/django-isnull-list-filter/
CC-MAIN-2018-51
en
refinedweb
Ubuntu.DownloadManager.SingleDownload Manage file downloads and tracking the progress. More... Properties - allowMobileDownload : bool - autoStart : bool - downloadId : string - downloadInProgress : bool - downloading : bool - errorMessage : string - headers : QVariantMap - isCompleted : bool - metadata : Metadata - progress : int - throttle : long Signals Methods Detailed Description SingleDownload provides facilities for downloading a single file, track the process, react to error conditions, etc. Example usage: import QtQuick 2.0 import Ubuntu.Components 1.2 import Ubuntu.DownloadManager 1.2 Rectangle { width: units.gu(100) height: units.gu(20) TextField { id: text placeholderText: "File URL to download..." height: 50 anchors { left: parent.left right: button.left rightMargin: units.gu(2) } } Button { id: button text: "Download" height: 50 anchors.right: parent.right onClicked: { single.download(text.text); } } ProgressBar { minimumValue: 0 maximumValue: 100 value: single.progress anchors { left: parent.left right: parent.right bottom: parent.bottom } SingleDownload { id: single } } } See also DownloadManager. Property Documentation This property sets if the download handled by this object will work under mobile data connection. This property sets if the downloads should start automatically, or let the user decide when to start them calling the "start()" method. This property provides the unique identifier that represents the download within the download manager. This property represents if the download is active, no matter if it's paused or anything. If a download is active, the value will be True. It will become False when the download finished or get canceled. This property represents the current state of the download. False if paused or not downloading anything. True if the file is currently being downloaded. The error message associated with the current download, if there is any. This property allows to get and set the headers that will be used to perform the download request. All headers must be strings or at least QVariant should be able to convert them to strings. The current state of the download. True if the download already finished, False otherwise. This property allows to get and set the metadata that will be linked to the download request. This property reports the current progress in percentage of the download, from 0 to 100. This property can be used to limit the bandwidth used for the download. Signal Documentation This signal is emitted when a download has finished. The downloaded file path is provided via the 'path' paremeter. The corresponding handler is onFinished Method Documentation Cancels a download. Creates the download for the given url and reports the different states through the properties. Pauses the download. An error is returned if the download was already paused. Resumes and already paused download. An error is returned if the download was already resumed or not paused. Starts the download, used when autoStart is False.
https://docs.ubuntu.com/phone/en/apps/api-qml-current/Ubuntu.DownloadManager.SingleDownload
CC-MAIN-2018-51
en
refinedweb
Hide Forgot From Bugzilla Helper: User-Agent: Mozilla/4.6 [en-gb]C-CCK-MCD NetscapeOnline.co.uk (Win98; I) Description of problem: Hello there, I just tried to compile package rpm-4.1-1.06 from Redhat 8.0 The compiler said ../db/db_load/db_load.c:810: warning: operation on `instr' may be undefined Here is an untested patch to shut up the compiler. *** ./db/db_load/db_load.c.old 2003-01-18 14:36:07.000000000 +0000 --- ./db/db_load/db_load.c 2003-01-18 14:36:32.000000000 +0000 *************** *** 806,813 **** *outstr++ = '\\'; continue; } ! c = digitize(dbenv, *instr, &e1) << 4 | ! digitize(dbenv, *++instr, &e2); if (e1 || e2) { badend(dbenv); return (EINVAL); --- 806,814 ---- *outstr++ = '\\'; continue; } ! c = digitize(dbenv, instr[ 0], &e1) << 4 | ! digitize(dbenv, instr[ 1], &e2); ! ++instr; if (e1 || e2) { badend(dbenv); return (EINVAL); Version-Release number of selected component (if applicable): How reproducible: Always Steps to Reproduce: 1. compile the program 2. 3. Additional info Same code compiles w/o warning using gcc-3.2.1. try compiling with C compiler flags "-g -O2 -Wall". Then you should see the problem. You appear to have not read the source code I pointed out. It is clearly wrong, merely by inspection. I *did* compile with -Wall, and several other checks, but am almost certainly using a different compiler. Yes, the source code looks "fishy", but that's a Berkeley DB problem, not an rpm problem. I choose to stay as close as possible to pristine Berkeley DB. >I *did* compile with -Wall, and several other checks, >but am almost certainly using a different compiler. You may have compiled with -Wall, but you didn't use 3.2.1 as well. Try using 3.2.1 *and* "-g -O2 -Wall" to reproduce my bug report. >Yes, the source code looks "fishy", but that's a >Berkeley DB problem, not an rpm problem. I choose >to stay as close as possible to pristine Berkeley DB. What you describe as "fishy" is undefined code. I strongly recommend reading section 2.12 of K&R 2. Page 52 in my copy. You might stay as close as you like to Berkeley, but it's still undefined code. I've provided a patch. Just what else do I have to do to get you to fix this bug ? Please send patch to sleepycat.
https://bugzilla.redhat.com/show_bug.cgi?id=82191
CC-MAIN-2018-51
en
refinedweb
Interface An interface holds definitions for a group of related functionalities which can be implemented by a struct or class. This polymorphism tool allows for usage like exploiting behavior from multiple sources within a single class. Interfaces prove critical in C#, which does not allow multiple inheritance, or allow structs to inherit. A class can only implement an interface once, but it can inherit one multiple times via base classes. A base class can implement interface members through virtual members; furthermore, it can alter interface behavior through overrides. Properties and indexers can employ extra accessors using properties and indexers defined in an interface. Interfaces can implement other interfaces. Use the interface keyword to define an interface. Review the example below: interface IInterface { bool Equals(T obj); } Review an example of interface use below: using System.Collections.Generic; using System.Linq; using System.Text; using System; namespace InterfaceAPP { public interface IShipping { // members of the interface void showTransfer(); int getQuantity(); } public class Shipping : IShipping { private string sCode; private string date; private int quantity; public Shipping() { sCode = " "; date = " "; quantity = 0; } public Shipping(string c, string d, int q) { sCode = c; date = d; quantity = q; } public int getQuantity() { return quantity; } public void showTransfer() { Console.WriteLine("Transfer: {0}", sCode); Console.WriteLine("Date: {0}", date); Console.WriteLine("Quantity: {0}", getQuantity()); } } class Test { static void Main(string[] args) { Shipping s1 = new Shipping("6544", "6/12/2080", 75900); Shipping s2 = new Shipping("5457", "9/09/2080", 422900); s1.showTransfer(); s2.showTransfer(); Console.ReadKey(); } } } Interfaces can contain properties, methods, indexers, events, and any combination of these four. They cannot contain operators, constants, fields, destructors, types, or instance destructors. Interfaces, by default, are public. They also cannot contain static members. In implementation, members must have a corresponding interface member declared public, non-static, and bearing the same name and signature. In a class or struct, an implementation must exist for all members defined by the interface. Classes and structs do not receive functionality from an interface in a way resembling an inherited base class; however, when a base class implements an interface, all classes inheriting it use that implementation.
https://freeasphosting.net/csharp-tutorial-interface.html
CC-MAIN-2018-51
en
refinedweb
Last week we built our first neural network and used it on a real machine learning problem. We took the Iris data set and built a classifier that took in various flower measurements. It determined, with decent accuracy, the type of flower the measurements referred to. But we’ve still only seen half the story of Tensor Flow! We’ve constructed many tensors and combined them in interesting ways. We can imagine what is going on with the “flow”, but we haven’t seen a visual representation of that yet. We’re in luck though, thanks to the Tensor Board application. With it, we can visualize the computation graph we've created. We can also track certain values throughout our program run. In this article, we’ll take our Iris example and show how we can add Tensor Board features to it. Here's the Github repo with all the code so you can follow along! Add an Event Writer The first thing to understand about Tensor Board is that it gets its data from a source directory. While we’re running our system, we have to direct it to write events to that directory. This will allow Tensor Board to know what happened in our training run. eventsDir :: FilePath eventsDir = "/tmp/tensorflow/iris/logs/" runIris :: FilePath -> FilePath -> IO () runIris trainingFile testingFile = withEventWriter eventsDir $ \eventWriter -> runSession $ do ... By itself, this doesn’t write anything down into that directory though! To understand the consequences of this, let’s boot up tensor board. Running Tensor Board Running our executable again doesn't bring up tensor board. It merely logs the information that Tensor Board uses. To actually see that information, we’ll run the tensorboard command. >> tensorboard --logdir=’/tmp/tensorflow/iris/logs’ Starting TensorBoard 47 at Then we can point our web browser at the correct port. Since we haven't written anything to the file yet, there won’t be much for us to see other than some pretty graphics. So let’s start by logging our graph. This is actually quite easy! Remember our model? We can use the logGraph function combined with our event writer so we can see it. model <- build createModel logGraph eventWriter createModel Now when we refresh Tensor Flow, we’ll see our system’s graph. What the heck is going on here? But, it’s very large and very confusing. The names of all the nodes are a little confusing, and it’s not clear what data is going where. Plus, we have no idea what’s going on with our error rate or anything like that. Let’s make a couple adjustments to fix this. Adding Summaries So the first step is to actually specify some measurements that we’ll have Tensor Board plot for us. One node we can use is a “scalar summary”. This provides us with a summary of a particular value over the course of our training run. Let’s do this with our errorRate node. We can use the simple scalarSummary function. errorRate_ <- render $ 1 - (reduceMean (cast correctPredictions)) scalarSummary "Error" errorRate_ The second type of summary is a histogram summary. We use this on a particular tensor to see the distribution of its values over the course of the run. Let’s do this with our second set of weights. We need to use readValue to go from a Variable to a Tensor. (finalWeights, finalBiases, finalResults) <- buildNNLayer numHiddenUnits irisLabels rectifiedHiddenResults histogramSummary "Weights" (readValue finalWeights) So let’s run tensor flow again. We would expect to see these new values show up under the Scalars and Histograms tabs. But they don’t. This is because we still to write these results to our event writer. And this turns out to be a little complicated. First, before we start training, we have to create a tensor representing all our summaries. logGraph eventWriter createModel summaryTensor <- build mergeAllSummaries Now if we had no placeholders, we could run this tensor whenever we wanted, and it would output the values. But our summary tensors depend on the input placeholders, which complicates the matter. So here’s what we’ll do. We’ll only write out the summaries when we check our error rate (every 100 steps). To do this, we have to change our error rate in the model to take the summary tensor as an extra argument. We’ll also have it add a ByteString as a return value to the original Float. data Model = Model { train :: TensorData Float -> TensorData Int64 -> Session () , errorRate :: TensorData Float -> TensorData Int64 -> SummaryTensor -> Session (Float, ByteString) } Within our model definition, we’ll use this extra parameter. It will run both the errorRate_ tensor AND the summary tensor together with the feeds: return $ Model , train = ... , errorRate = \inputFeed outputFeed summaryTensor -> do (errorTensorResult, summaryTensorResult) <- runWithFeeds [ feed inputs inputFeed , feed outputs outputFeed ] (errorRate_, summaryTensor) return (unScalar errorTensorResult, unScalar summaryTensorResult) Now we need to modify our calls to errorRate below. We’ll pass the summary tensor as an argument, and get the bytes as output. We’ll write it to our event writer (after decoding), and then we’ll be done! --, summaryBytes) <- (errorRate model) trainingInputs trainingOutputs summaryTensor let summary = decodeMessageOrDie summaryBytes liftIO $ putStrLn $ "Current training error " ++ show (err * 100) logSummary eventWriter (fromIntegral i) summary liftIO $ putStrLn "" -- Testing let (testingInputs, testingOutputs) = convertRecordsToTensorData testRecords (testingError, _) <- (errorRate model) testingInputs testingOutputs summaryTensor liftIO $ putStrLn $ "test error " ++ show (testingError * 100) Now we can see what our summaries look like by running tensor board again! Scalar Summary of our Error Rate Histogram summary of our final weights. Annotating our Graph Now let’s look back to our graph. It’s still a bit confusing. We can clean it up a lot by creating “name scopes”. A name scope is part of the graph that we set aside under a single name. When Tensor Board generates our graph, it will create one big block for the scope. We can then zoom in and examine the individual nodes if we want. We’ll make three different scopes. First, we’ll make a scope for each of the hidden layers of our neural network. This is quite easy, since we already have a function for creating these. All we have to do is make the function take an extra parameter for the name of the scope we want. Then we wrap the whole function within the withNameScope function. buildNNLayer :: Int64 -> Int64 -> Tensor v Float -> Text -> Build (Variable Float, Variable Float, Tensor Build Float) buildNNLayer inputSize outputSize input layerName = withNameScope layerName $ do weights <- truncatedNormal (vector [inputSize, outputSize]) >>= initializedVariable bias <- truncatedNormal (vector [outputSize]) >>= initializedVariable let results = (input `matMul` readValue weights) `add` readValue bias return (weights, bias, results) We supply our name further down in the code: (hiddenWeights, hiddenBiases, hiddenResults) <- buildNNLayer irisFeatures numHiddenUnits inputs "layer1" let rectifiedHiddenResults = relu hiddenResults (finalWeights, finalBiases, finalResults) <- buildNNLayer numHiddenUnits irisLabels rectifiedHiddenResults "layer2" Now we’ll add a scope around all our error calculations. First, we combine these into an action wrapped in withNameScope. Then, observing that we need the errorRate_ and train_ steps, we return those from the block. That’s it! (errorRate_, train_) <- withNameScope "error_calculation" $ do actualOutput <- render $ cast $ argMax finalResults (scalar (1 :: Int64)) let correctPredictions = equal actualOutput outputs er <- render $ 1 - (reduceMean (cast correctPredictions)) scalarSummary "Error" er let outputVectors = oneHot outputs (fromIntegral irisLabels) 1 0 let loss = reduceMean $ fst $ softmaxCrossEntropyWithLogits finalResults outputVectors let params = [hiddenWeights, hiddenBiases, finalWeights, finalBiases] tr <- minimizeWith adam loss params return (er, tr) Now when we look at our graph, we see that it’s divided into three parts: our two layers, and our error calculation. All the information flows among these three parts (as well as the "Adam" optimizer portion). Much Better Conclusion By default, Tensor Board graphs can look a little messy. But by adding a little more information to the nodes and using scopes, you can paint a much clearer picture. You can see how the data flows from one end of the application to the other. We can also use summaries to track important information about our graph. We’ll use this most often for the loss function or error rate. Hopefully, we'll see it decline over time. Next week we’ll add some more complexity to our neural networks. We'll see new tensors for convolution and max pooling. This will allow us to solve the more difficult MNIST digit recognition problem. Stay tuned! If you’re itching to try out some Tensor Board functionality for yourself, check out our in-depth Tensor Flow guide. It goes into more detail about the practical aspects of using this library. If you want to get the Haskell Tensor Flow library running on your local machine, check it out! Trust me, it's a little complicated, unless you're a Stack wizard already! And if this is your first exposure to Haskell, try it out! Take a look at our guide to getting started with the language!
https://mmhaskell.com/blog/2017/8/28/putting-the-flow-in-tensor-flow
CC-MAIN-2018-51
en
refinedweb
- 12 Oct, 2019 1 commit - 09 Oct, 2019 6 commits. This allows the stage1 compiler (which needs to run on the build platform and produce code for the host) to depend upon properties of the target. This is wrong. However, it's no more wrong than it was previously and @Erichson2314 is working on fixing this so I'm going to remove the guard so we can finally bootstrap HEAD with ghc-8.8 (see issue #17146). To avoid polluting the macro namespace - 08 Oct, 2019 26 commits - Sebastian Graf authored 7. - Andrey Mokhov authored
https://gitlab.haskell.org/nineonine/ghc/-/commits/f1e5b134f06299ba9af223656cfcf0c5993a9563
CC-MAIN-2020-40
en
refinedweb
指定した開始位置と終了位置の間にラインを描画します。 The line will be drawn in the Game view of the editor when the game is running and the gizmo drawing is enabled. The line will also be drawn in the Scene when it is visible in the Game view. Leave the game running and showing the line. Switch to the Scene view and the line will be visible. The duration is the time (in seconds) for which the line will be visible after it is first displayed. A duration of zero shows the line for just one frame. Note: This is for debugging playmode only. Editor gizmos should be drawn with Gizmos.Drawline or Handles.DrawLine instead. using UnityEngine; public class ExampleScript : MonoBehaviour { void Start() { // draw a 5-unit white line from the origin for 2.5 seconds Debug.DrawLine(Vector3.zero, new Vector3(5, 0, 0), Color.white, 2.5f); } private float q = 0.0f; void FixedUpdate() { // always draw a 5-unit colored line from the origin Color color = new Color(q, q, 1.0f); Debug.DrawLine(Vector3.zero, new Vector3(0, 5, 0), color); q = q + 0.01f; if (q > 1.0f) { q = 0.0f; } } } using UnityEngine; public class Example : MonoBehaviour { // Event callback example: Debug-draw all contact points and normals for 2 seconds. void OnCollisionEnter(Collision collision) { foreach (ContactPoint contact in collision.contacts) { Debug.DrawLine(contact.point, contact.point + contact.normal, Color.green, 2, false); } } }
https://docs.unity3d.com/ja/2020.1/ScriptReference/Debug.DrawLine.html
CC-MAIN-2020-40
en
refinedweb
%matplotlib inline import random import numpy as np import matplotlib.pyplot as plt from math import sqrt, pi import scipy import scipy.stats plt.style.use('seaborn-whitegrid') Now we'll consider 2D numeric data. Recall that we're taking two measurements simultaneously so that their should be an equal number of data points for each dimension. Furthermore, they should be paired. For example, measuring people's weight and height is valid. Measuring one group of people's height and then a different group of people's weight is not valid. Our example for this lecture will be one of the most famous datasets of all time: the Iris dataset. It's a commonly used dataset in education and describes measurements in centimeters of 150 Iris flowers. The measured data are the columns and each row is an iris flower. They are sepal length, sepal width, pedal length, pedal width, and species. We'll ignore species for our example. import pydataset data = pydataset.data('iris').values #remove species column data = data[:,:4].astype(float) np.cov(data[:,1], data[:,3], ddof=1) array([[ 0.18997942, -0.12163937], [-0.12163937, 0.58100626]]) This is called a covariance matrix:$$\left[\begin{array}{lr} \sigma_{xx} & \sigma_{xy}\\ \sigma_{yx} & \sigma_{yy}\\ \end{array}\right]$$ The diagonals are the sample variances and the off-diagonal elements are the sample covariances. It is symmetric, since $\sigma_{xy} = \sigma_{yx}$. The value we observed for sample covariance is negative covariance the measurements. That means as one increases, the other decreases. The ddof was set to 1, meaning that the divosor for sample covariance is $N - 1$. Remember that $N$ is the number of pairs of $x$ and $y$ values. The covariance matrix can be any size. So we can explore all possible covariances simultaneously. #add rowvar = False to indicate we want cov #over our columns and not rows np.cov(data, rowvar=False, ddof=1) array([[ 0.68569351, -0.042434 , 1.27431544, 0.51627069], [-0.042434 , 0.18997942, -0.32965638, -0.12163937], [ 1.27431544, -0.32965638, 3.11627785, 1.2956094 ], [ 0.51627069, -0.12163937, 1.2956094 , 0.58100626]]) To read this larger matrix, recall the column descriptions: sepal length (0), sepal width (1), pedal length (2), pedal width (3). Then use the row and column index to identify which sample covariance is being computed. The row and column indices are interchangable because it is symmetric. For example, the sample covariance of sepal length with sepal width is $-0.042$ centimeters. plt.plot(data[:,0], data[:,2]) plt.show() What happened? It turns out, our data are not sorted according to sepal length, so the lines go from value to value. There is no reason that our data should be ordered by sepal length, so we need to use dot markers to get rid of the lines. plt.title('Sample Covariance: 1.27 cm') plt.plot(data[:,0], data[:,2], 'o') plt.xlabel('Sepal Length [cm]') plt.ylabel('Pedal Length [cm]') plt.show() Now the other plot plt.title('Sample Covariance: 0.52 cm') plt.plot(data[:,0], data[:,3], 'o') plt.xlabel('Sepal Length [cm]') plt.ylabel('Pedal Width [cm]') plt.show() That is suprising! The "low" sample variance plot looks like it has as much correlation as the "high" sample covariance. That's because sample variance measures both the underlying variance of both dimensions and their correlation. The reason this is a low sample covariance is that the y-values change less than in the first plot. Since the covariance includes the correlation between variables and the variance of the two variables, sample correlation tries to remove the variacne so we can view only correlation.$$r_{xy} = \frac{\sigma_{xy}}{\sigma_x \sigma_y}$$ Similar to the covariance, there is something called the correlation matrix or the normalized covariance matrix. np.corrcoef(data, rowvar=False) array([[ 1. , -0.11756978, 0.87175378, 0.81794113], [-0.11756978, 1. , -0.4284401 , -0.36612593], [ 0.87175378, -0.4284401 , 1. , 0.96286543], [ 0.81794113, -0.36612593, 0.96286543, 1. ]]) Note that we don't have to pass in ddof because it cancels in the correlation coefficient expression. Now we also see that the two plots from above have similar correlations as we saw visually. Let's try creating some synthetic data to observe properties of correlation. I'm using the rvs function to sample data from distributions using scipy.stats. x = scipy.stats.norm.rvs(size=15, scale=4) y = scipy.stats.norm.rvs(size=15, scale=4) cor = np.corrcoef(x,y)[0,1] plt.title('r = {}'.format(cor)) plt.plot(x, y, 'o') plt.xlabel('x') plt.ylabel('$y$') plt.show() x = scipy.stats.norm.rvs(size=100, scale=4) y = x ** 2 cor = np.corrcoef(x,y)[0,1] plt.title('r = {}'.format(cor)) plt.plot(x, y, 'o') plt.xlabel('x') plt.ylabel('$x^2$') plt.show() See that $x^2$ is an analytic function of $x$, but it has a lower correlation than two independent random numbers. That's because correlation coefficients are unreliable for non-linear behavior. Another example:
https://nbviewer.jupyter.org/github/whitead/numerical_stats/blob/master/unit_7/lectures/lecture_4.ipynb
CC-MAIN-2020-40
en
refinedweb
We will see how to install Jupyter on different environments. We will install it on Windows, the Mac, Linux, and a server machine. Some consideration should be given to multiple user access when installing on a server. If you are going to install it on a non-Windows environment, please review the Anaconda installation on Windows first as the same installation steps for Anaconda are available on other environments. The Windows environment suffers from a drawback: none of the standard Linux tools are available out of the box. This is a problem as Jupyter and many other programs were developed on a version of Unix and expect many developer tools normally used in Unix to be available. Luckily, there is a company that has seen this problem and addressed itâAnaconda. Anaconda describes itself as a Python Data Science Platform, but its platform allows for a variety of solutions in data science that are not based on Python. After installing Anaconda and starting Navigator, you get to a dashboard that presents the programs available, such as: The Anaconda Navigator provides access to each of the programs you have installed (using Anaconda). Each of the programs can be started from Navigator (by clicking on the associated Launch button) and you can also start them individually (as they are standalone applications). As you install programs with Anaconda, additional menu items become available under the Anaconda menu tree for each of the applications to run directly. The menu item has coding to start the individual applications as needed. As you can see in the preceding screen, the Home display shows the applications available. There are additional menu choices for: Environments: This menu displays all the Python packages that have been installed. I don't think R packages are displayed, nor are other tools or packages included in this display. Projects (beta): This menu is usually empty. I have been using/upgrading Anaconda for a while and have not seen anything displayed here. Learning: This is a very useful feature, where a number of tutorials, videos, and write-ups have been included for the different applications that you (may) have installed. Community: This lists some community groups for the different products you have installed. The preceding screen shows Jupyter as an installed program. The standard install of Anaconda does include Jupyter. If you choose not to use Anaconda, you can install Jupyter directly. Jupyter, as a project, grew out of Python, so it is somewhat dependent on which version of Python you have installed. For Python 2 installations, the command line steps to install Jupyter are: python -m pip install --upgrade pip python -m pip install jupyter This assumes you have pip installed. The pip system is a package management system written in Python. To install pip on your Windows machine, execute the following line: python get-pip.py As you can see, this is all Python (this code calls Python to execute a standard Python script). Anaconda provides the tools to install a number of programs, including Jupyter. Once you have installed Anaconda, Jupyter will be available to you already. The only issue I found was that the engine installed was Python 2 instead of Python 3. There is a process that Anaconda uses to decide which version of Python to run on your machine. In my case, I started out with Python 2. To upgrade to Python 3, I used these commands: conda create -n py3k python=3 anaconda source activate py3k ipython kernelspec install-self After this, when you start Jupyter, you will have the Python 3 engine choice. You may prefer to have the Python 2 engine also available. This might be if you want to use scripts that were written using Python 2. The commands to add Python 2 back in as an engine choice are: python2 -m pip install ipykernel python2 -m ipykernel install --user You should now see Python 2 and Python 3 as engine choices when you start Jupyter: Apple Macintosh provides a graphical interface that runs on the OS/X operating system. OS/X includes running BSD under the hood. BSD is a version of Unix originally developed at Berkeley. As a version of Unix, it has all the standard developer tools expected by Jupyter to install and upgrade the software; they are built-in tools. Jupyter can be installed on the Mac using Anaconda (as before for Windows) or via the command line. In this section, we will go through the steps for installing Jupyter on Mac. Just as with Windows earlier, we download the latest version of Anaconda and run the installation program. One of the screens should look like this: The Anaconda install is very typical for Mac installs: users can run the program and make sure they want to allocate so much storage for the application to install. Once installed, Jupyter (and Anaconda Navigator) is available just like any other application on the system. You can run Jupyter directly, or you can launch Jupyter from the Anaconda Navigator display. Many Mac users will prefer using the command line to install Jupyter. Using the command line, you can decide whether to install Jupyter with the Python 2 or Python 3 engine. If you want to add the Python 2 engine as a choice in Jupyter, you can follow similar steps for doing so in the earlier Windows command line installation section. The script to install Jupyter on Mac via the command line with the Python 3 engine is: bash ~/Downloads/Anaconda3-5.0.0-MacOSX-x86_64.sh Similarly, the command to install with the Python 2 engine is as follows: bash ~/Downloads/Anaconda2-5.0.0-MacOSX) At this point, you should be able to start Jupyter with the command line and see the appropriate engine choice available: jupyter notebook Linux is one of the easier installations for Jupyter. Linux has all the tools required to update Jupyter going forward. For Linux, we use similar commands to those shown earlier to install on the Mac from the command line. Linux is a very common platform for most programming tasks. Many of the tools used in programming have been developed on Linux and later ported to other operating systems, such as Windows. The script to install Jupyter on Linux via the command line with the Python 3 engine is: bash ~/Downloads/Anaconda3-5.0.0.1-Linux-x86_64.sh Similarly, the command to install with the Python 2 engine is: bash ~/Downloads/Anaconda2-5.0.0.1-Linux) And, as shown earlier under specify Windows and Mac installation sections, you can have the Python 2 and Python 3 engines available using similar steps. At this point, you should be able to start Anaconda Navigator with the command line: anaconda-navigator Or you can run Jupyter directly using the regular command line: jupyter notebook The term server has changed over time to mean several things. We are interested in a machine that will have multiple users accessing the same software concurrently. Jupyter Notebooks can be run by multiple users. However, there is no facility to separate the data for one user from another. Standard Jupyter installations only expect and account for one user. If we have a Notebook that allows for data input from the user, then the data from different users will be intermingled in one instance and possibly displayed incorrectly. See the following example in this section. We can see an example of a collision with a Notebook that allows for data entry from a user and responds with incorrect results: - I call upon an example that I have used elsewhere for illustration. For this example, we will use a simple Notebook that asks the user for some information and changes the display to use that information: from ipywidgets import interact def myfunction(x): return x interact(myfunction, x= "Hello World "); - The script presents a textbox to the user, with the original value of the box containing the Hello Worldstring. - As the user interacts with the input field and changes the value, the value of the variable xin the script changes accordingly and is displayed on screen. For example, I have changed the value to the letter A: - We can see the multiuser problem if we open the same page in another browser window (copy the URL, open a new browser window, paste in the URL, and hit Enter). We get the exact same displayâwhich is incorrect. We expected the new window to start with a new script, just prompting us with the default Hello Worldmessage. However, since the Jupyter software expects only one user, there is only one copy of the variable x; thus, it displays its value A. We can have a Notebook server that expects multiple users and separates their instances from each other without the annoying collisions occurring. A Notebook server includes the standard Jupyter Notebook application that we have seen, but a server can also include software to distinguish the data of one user from another. We'll cover several examples of this solution in Chapter 8, Multiuser Environments.
https://www.packtpub.com/product/jupyter-cookbook/9781788839440
CC-MAIN-2020-40
en
refinedweb
Iḿ trying to install plotly & set up streaming, but i can seem to get past step 1 (installing plotly). Every reference to plotly i try in python gives the same response: import plotly Traceback (most recent call last): File “”, line 1, in File “plotly.py”, line 1, in import plotly.plotly as py # plotly library ImportError: No module named plotly plotly.version Traceback (most recent call last): File “”, line 1, in NameError: name ‘plotly’ is not defined I´ve read through this forum & updated pip & python, as well as put the correct username & API key in the credentials file, all to no avail. Anyone got any ideas for a beginner?
https://community.plotly.com/t/installing-plotly/2841
CC-MAIN-2020-40
en
refinedweb
I have a timestamp of form [10/15/11 11:55:08:992 PDT] . . . log entry text . . . I expect I can try the following specifier in props.conf file for the above Oct 10th 2011 date format: TIMEPREFIX = ^. MAXTIMESTAMPLOOKAHEAD = 22 TIMEFORMAT = %y/%d/%m %k:%M:%S But for dates where the day of the month of log entry is less than 10 I hve something like: [12/8/11 11:55:08:992 PDT] . . . log entry text . . . My understanding is %d works for a two digit day format, but I don't see a good option when day can be two digits or a single non-padded digit day of month representation. Suggestions? I believe unfortunately that the "%e" opption still winds up with two characters. Though lot of python tutorials do not mention it, when the day number is less than 10 "%e" seems to front pad with a blank, where "%d" frontpads with a zero. As is born out by the folowing ksh and python script content and output. #---------------------- #!/bin/ksh # kshdatewithdand_e # If current day of the month is greater than 9 then print date time out # for the 9th of the month. Otherwise print out current date time # DAY= date +%e if [ $DAY -gt 9 ] then let BACK=$DAY-9 else BACK=0 fi date -d "$BACK days ago" +"%y/%d/%m %k:%M:%S" date -d "$BACK days ago" +"%y/%e/%m %k:%M:%S" # END SAMPLE OUTPUT: 11/09/12 10:50:15 11/ 9/12 10:50:15 #---------------------- #!/usr/bin/python # pythondatewithdand_e" # Using hard coded date here # import time t = (2011, 12, 9, 17, 3, 38, 1, 48, 0) t = time.mktime(t) print time.strftime("%y/%d/%m %k:%M:%S", time.gmtime(t)) print time.strftime("%y/%e/%m %k:%M:%S", time.gmtime(t)) # END SAMPLE OUTPUT: 11/09/12 23:03:38 11/ 9/12 23:03:38 #---------------------- Unless splunk does something special for "%e" different than python or ksh, it seems this would still not match for a single character day in date field I have not had a chance to experiment further so is still conjecture on my part. Yes - it does not do what it is supposed to do. I want to extract the day from "Aug 18 17:11:16" and "Aug 8 17:11:16". %e is not white space padded. Hi, not that I've tried it, but %e might work for you. According to %d - day of the month (01 to 31) %e - day of the month (1 to 31) Hope this helps, Kristian
https://community.splunk.com/t5/Getting-Data-In/How-do-I-configure-timestamp-extraction-where-day-may-be-one-or/td-p/32505
CC-MAIN-2020-40
en
refinedweb
Install Seldon-Core¶ Pre-requisites:¶ Kubernetes cluster version equal or higher than 1.12 For Openshift it requires version 4.2 or higher Installer method Helm version equal or higher than 3.0 Kustomize version equal or higher than 0.1.0 Ingress Istio ( sample installation using Istio 1.5 can be found at ) Ambassador Running older versions of Seldon Core?¶ Make sure you read the “Upgrading Seldon Core Guide” Seldon Core will stop supporting versions prior to 1.0 so make sure you upgrade. If you are running an older version of Seldon Core, and will be upgading it please make sure you read the Upgrading Seldon Core docs to understand breaking changes and best practices for upgrading. Please see Migrating from Helm v2 to Helm v3 if you are already running Seldon Core using Helm v2 and wish to upgrade. Install Seldon Core with Helm¶ First install Helm 3.x. When helm is installed you can deploy the seldon controller to manage your Seldon Deployment graphs. If you want to provide advanced parameters with your installation you can check the full Seldon Core Helm Chart Reference. The namespace seldon-system is preferred, so we can create it: kubectl create namespace seldon-system Now we can install Seldon Core in the seldon-system namespace. helm install seldon-core seldon-core-operator \ --repo \ --set usageMetrics.enabled=true \ --namespace seldon-system Make sure you install it with the relevant ingress ( ambassador.enabled or istio.enabled) so you are able to send requests (instructions below). In order to install a specific version you can do so by running the same command above with the --version flag, followed by the version you want to run. Whenever a new PR was merged to master, we have set up our CI to build a “SNAPSHOT” version, which would contain the Docker images for that specific development / master-branch code. Whilst the images are pushed under SNAPSHOT, they also create a new “dated” SNAPSHOT version entry, which pushes images with the tag "<next-version>-SNAPSHOT_<timestamp>". A new branch is also created with the name "v<next-version>-SNAPSHOT_<timestamp>", which contains the respective helm charts, and allows for the specific version (as outlined by the version in version.txt) to be installed. This means that you can try out a dev version of master if you want to try a specific feature before it’s released. For this you would be able to clone the repository, and then checkout the relevant SNAPSHOT branch. Once you have done that you can install seldon-core using the following command: helm install helm-charts/seldon-core-operator seldon-core-operator In this case helm-charts/seldon-core-operator is the folder within the repository that contains the charts. You can follow the cert manager documentation to install it. You can then install seldon-core with: helm install seldon-core seldon-core-operator \ --repo \ --set usageMetrics.enabled=true \ --namespace seldon-system \ --set certManager.enabled=true Ingress Support¶ For particular ingresses that we support, you can inform the controller it should activate processing for them. Ambassador add --set ambassador.enabled=true: The controller will add annotations to services it creates so Ambassador can pick them up and wire an endpoint for your deployments. For full instructions on installation with Ambassador read the Ingress with Ambassador page. Istio Gateway add --set istio.enabled=true: The controller will create virtual services and destination rules to wire up endpoints in your istio ingress gateway. For full instructions on installation with Istio read the Ingress with Istio page. Seldon Core Kustomize Install¶ The Kustomize installation can be found in the /operator/config folder of the repo. You should copy this template to your own kustomize location for editing. To use the template directly, there is a Makefile which has a set of useful commands: For kubernetes clusters of version higher than 1.15, make sure you comment the patch_object_selector here. Install cert-manager make install-cert-manager Install Seldon using cert-manager to provide certificates. make deploy Install Seldon with provided certificates in config/cert/ make deploy-cert Other Options¶ Now that you have Seldon Core installed, you can set it up with: Install with Kubeflow¶ GCP MarketPlace¶ If you have a Google Cloud Platform account you can install via the GCP Marketplace. OperatorHub¶ You can install Seldon Core from Operator Hub. Upgrading from Previous Versions¶ See our upgrading notes Advanced Usage¶ You will need a k8s cluster >= 1.15 Helm¶ You can install the Seldon Core Operator so it only manages resources in its namespace. An example to install in a namespace seldon-ns1 is shown below: kubectl create namespace seldon-ns1 kubectl label namespace seldon-ns1 seldon.io/controller-id=seldon-ns1 We label the namespace with seldon.io/controller-id=<namespace> to ensure if there is a clusterwide Seldon Core Operator that it should ignore resources for this namespace. Install the Operator into the namespace: helm install seldon-namespaced seldon-core-operator --repo \ --set singleNamespace=true \ --set image.pullPolicy=IfNotPresent \ --set usageMetrics.enabled=false \ --set crd.create=true \ --namespace seldon-ns1 We set crd.create=true to create the CRD. If you are installing a Seldon Core Operator after you have installed a previous Seldon Core Operator on the same cluster you will need to set crd.create=false. Kustomize¶ An example install is provided in the Makefile in the Operator folder: make deploy-namespaced1 See the multiple server example notebook. Label focused Seldon Core Operator (version >=1.0)¶ You will need a k8s cluster >= 1.15 You can install the Seldon Core Operator so it manages only SeldonDeployments with the label seldon.io/controller-id where the value of the label matches the controller-id of the running operator. An example for a namespace seldon-id1 is shown below: Helm¶ kubectl create namespace seldon-id1 To install the Operator run: helm install seldon-controllerid seldon-core-operator --repo \ --set singleNamespace=false \ --set image.pullPolicy=IfNotPresent \ --set usageMetrics.enabled=false \ --set crd.create=true \ --set controllerId=seldon-id1 \ --namespace seldon-id1 We set crd.create=true to create the CRD. If you are installing a Seldon Core Operator after you have installed a previous Seldon Core Operator on the same cluster you will need to set crd.create=false. For kustomize you will need to uncomment the patch_object_selector here Kustomize¶ An example install is provided in the Makefile in the Operator folder: make deploy-controllerid See the multiple server example notebook.
https://docs.seldon.io/projects/seldon-core/en/latest/workflow/install.html
CC-MAIN-2020-40
en
refinedweb
Understanding IPFS in Depth(4/6): What is MultiFormats? Every Choice in Computing has a Tradeoff. It’s Time to Make Future-Proof Systems. Receive curated Web 3.0 content like this with a summary every day via WhatsApp, Telegram, Discord, or Email. A Complete Guide Including IPLD, Libp2p, MultiFormats & Filecoin hackernoon.com In part 3, we discussed the Significance of IPNS(InterPlanetary Naming System), How it Works and its technical specification. We also went through a tutorial in which we created and hosted a website totally using IPFS Stack. You can check it out here: Understanding IPFS in Depth(3/6): What is InterPlanetary Naming System(IPNS)? Why do need IPNS, How to Use it and it’s Comparison with DNS hackernoon.com In this part, we are going to dive deep into Multiformats. We will explore: - Why do we Need Multiformats? - Ok, I think we need it. But What is it? - This seems great. Tell me how to use it? If you like high-tech Web3 concepts like Multiformats explained in simple words with interactive tutorials, then head here. Home - SimpleAsWater SimpleAsWater, a Community Platform to Learn, Build, Collaborate & Discover Dapps and Web 3.0 Stories. simpleaswater.com I hope you learn a lot about IPFS from this series. Let’s get started! Every choice in computing has a tradeoff. This includes formats, algorithms, encodings, and so on. And even with a great deal of planning, decisions may lead to breaking changes down the road, or to solutions which are no longer optimal. Allowing systems to evolve and grow, without introducing breaking changes is important. But, Why do we Need Multiformats? To understand the need for Multiformats, let’s take an example of git protocol. A lot of people use it every single day to use services like Github, Gitlab, Bitbucket, etc. We know that git uses hashes for a lot of things. Right now git uses SHA-1 as its hashing algorithm. These hashing algorithms play a very important role. They keep things secure. Not just in git, but in healthcare, global financial systems, and governments too. The way they work is that they are one-way functions. So, you can get an output(Hash) from an input(something) using the hashing function, but it’s practically impossible to get the input from the output. But with time, as more powerful computers are being developed, some of these hash functions have started failing; meaning now you can get the input from the output, hence breaking the security of the systems that use the function. This is what happened to the MD5 hash function. So, just like MD5, someday SHA1 will be broken…and then we would need to use a better hashing function. But the problem here is that, as these algorithms are hard-wired to the ecosystem, it’s really hard to make such changes. Plus what happens to all the codebase that was using the old SHA1? All of that will be rendered incompatible…that sucks! And this problem of non-future-proofing and incompatibility is not just limited to the hashing algorithms. The network protocols are also a prime host of these problems. Take the example of HTTP/2, which was introduced in 2015. From a network’s viewpoint, HTTP/2 made a few notable changes. As it’s a binary protocol, so any device that assumes it’s HTTP/1.1 is going to break. And that meant changing all the things that use HTTP/1.1, that includes browsers and your web servers. So, summing up there are a number of problems that we face: - Introducing breaking changes to update systems with better security. - Introducing breaking changes due to some unforeseen issues. - And sometimes we need to make trade-offs when it comes to multiple numbers of options, each having a desired trait, but you can have only one. Ever been to a candy store, where you wanted to buy the whole store…but your Mom let you buy only one…Yeah, it’s same with tech too. These problems not only break things but also make the whole development cycle slow, as it takes a lot of time to carefully shift the whole system. Almost every system that we see today was NEVER designed with keeping the fact in mind that someday it is going to be outdated. This is not the way we want our future to be. So, we need to embrace the fact that things change. The Multiformats Project introduces a set of standards/protocols that embrace this fact and allows multiple protocols to co-exist so that even if there is a breaking change, the ecosystem still supports all the versions of the protocol. Now, as we know we know “Why”, let’s see “What”… What are Multiformats? The Multiformats Project is a collection of protocols which aim to future-proof systems, today. They do this mainly by enhancing format values with self-description. This allows interoperability, protocol agility, and helps us avoid lock-in. The self-describing aspects of the protocols have a few stipulations: - They MUST be in-band (with the value); not out-of-band (in context). - They MUST avoid lock-in and promote extensibility. - They MUST be compact and have a binary-packed representation. - They MUST have a human-readable representation. Multiformat protocols Currently, we have the following multiformat protocols: - Multihash: Self-describing hashes - Multiaddr: Self-describing network addresses - Multibase: Self-describing base encodings - Multicodec: Self-describing serialization - Multistream: Self-describing stream network protocols - Multistream-select: Friendly protocol multiplexing. - Multigram(WIP): Self-describing packet network protocols - Multikey: cryptographic keys and artifacts Each of the projects has its list of implementations in various languages. Ok. This all sounds a bit complex. Let’s Break it down a bit We will go through each of these multiformat protocols and try to understand how they work. Multihash Multihash is a protocol for differentiating outputs from various well-established hash functions, addressing size + encoding considerations. It is useful to write applications that future-proof their use of hashes and allow multiple hash functions to coexist. Safer, easier cryptographic hash function upgrades, like all tools which make those assumptions would have to be upgraded to use the new hash function and new hash digest length. Tools may face serious interoperability problems or error-prone special casing. As we discussed earlier, there can be a number of problems in the git example: - How many programs out there assume a git hash is a sha1 hash? - How many scripts assume the hash value digest is exactly 160 bits? - How many tools will break when these values change? - How many programs will fail silently when these values change? This is precisely where Multihash shines. It was designed for upgrading.. The MultiHash Format A multihash follows the TLV (type-length-value) pattern. - the type <hash-func-type>is an unsigned variable integer identifying the hash function. There is a default table, and it is configurable. The default table is the multicodec table. - the length <digest-length>is an unsigned variable integer counting the length of the digest, in bytes - the value <digest-value>is the hash function digest, with a length of exactly <digest-length>bytes. To understand the significance of multihash format, let’s use some visual aid. Consider these 4 different hashes of the same input Same length: 256 bits Different hash functions Idea: self-describe the values to distinguish Multihash: fn code + length prefix Multihash: a pretty good multiformat I hope this sums up Multihash pretty well. Implementations You can find a number of multihash implementations in multiple languages. Tutorial You can find a hands-on tutorial on multihash at the end of this post. Multiaddr Multiaddr is a format for encoding addresses from various well-established network protocols. It is useful to write applications that future-proof their use of addresses and allow multiple transport protocols and addresses to coexist. Network Protocol Ossification The current network addressing scheme in the internet IS NOT self-describing. Addresses of the following forms leave much to interpretation and side-band context. The assumptions they make cause applications to also make those assumptions, which causes lots of “this type of address”-specific code. The network addresses and their protocols rust into place, and cannot be displaced by future protocols because the addressing prevents change. For example, consider: 127.0.0.1:9090 # ip4. is this TCP? or UDP? or something else? [::1]:3217 # ip6. is this TCP? or UDP? or something else? //foo.com:1234 # use DNS, to resolve to either ip4 or ip6, but definitely use # tcp after. or maybe quic... >.< # these default to TCP port :80. Instead, when addresses are fully qualified, we can build applications that will work with network protocols of the future, and do not accidentally ossify the stack. /ip4/127.0.0.1/udp/9090/quic /ip6/::1/tcp/3217 /ip4/127.0.0.1/tcp/80/http/baz.jpg /dns4/foo.com/tcp/80/http/bar/baz.jpg /dns6/foo.com/tcp/443/https Multiaddr Format A multiaddr value is a recursive (TLV)+ (type-length-value repeating) encoding. It has two forms: - a human-readable version to be used when printing to the user (UTF-8) - a binary-packed version to be used in storage, transmissions on the wire, and as a primitive in other formats. The human-readable version - path notation nests protocols and addresses, for example: /ip4/127.0.0.1/udp/4023/quic(this is the repeating part). - a protocol MAY be only a code, or also have an address value (nested under a /) (eg. /quicand /ip4/127.0.0.1) - the type <addr-protocol-str-code>is a string code identifying the network protocol. The table of protocols is configurable. The default table is the multicodec table. - the value <addr-value>is the network address value, in natural string form. The binary-packed version - the type <addr-protocol-code>is a variable integer identifying the network protocol. The table of protocols is configurable. The default table is the multicodec table. - the length is an unsigned variable integer counting the length of the address value, in bytes. - The length is omitted by protocols who have an exact address value size, or no address value. - the value <addr-value>is the network address value, of length L. - The value is omitted by protocols who have no address value. Implementations You can find a number of multiaddr implementations in multiple languages. Tutorial You can find a hands-on tutorial on multiaddr at the end of this post. Multibase Multibase is a protocol for disambiguating the encoding of base-encoded (e.g., base32, base64, base58, etc.) binary appearing in the text. When text is encoded as bytes, we can usually use a one-size-fits-all encoding (UTF-8) because we’re always encoding to the same set of 256 bytes (+/- the NUL byte). When that doesn’t work, usually for historical or performance reasons, we can usually infer the encoding from the context. However, when bytes are encoded as text (using a base encoding), the base. Unfortunately, it’s not always clear what base encoding is used; that’s where multibase comes in. It answers the question: Given data d encoded into text s, what base is it encoded with? Multibase Format The Format is: <base-encoding-character><base-encoded-data> Where <base-encoding-character> is used according to the multibase table. Here is an example to show how it works. Consider the following encodings of the same binary string: 4D756C74696261736520697320617765736F6D6521205C6F2F # base16 (hex) JV2WY5DJMJQXGZJANFZSAYLXMVZW63LFEEQFY3ZP # base32 YAjKoNbau5KiqmHPmSxYCvn66dA1vLmwbt # base58 TXVsdGliYXNlIGlzIGF3ZXNvbWUhIFxvLw== # base64 And consider the same encodings with their multibase prefix F4D756C74696261736520697320617765736F6D6521205C6F2F # base16 F BJV2WY5DJMJQXGZJANFZSAYLXMVZW63LFEEQFY3ZP # base32 B zYAjKoNbau5KiqmHPmSxYCvn66dA1vLmwbt # base58 z MTXVsdGliYXNlIGlzIGF3ZXNvbWUhIFxvLw== # base64 M The base prefixes used are: F, B, z, M. Now, you can write self-descriptive encoded text :) Implementations You can find a number of multibase implementations in multiple languages. Now, the next two, Multicodec and Multistream are a bit inter-related, so I will try to explain the motivation behind these two, but you may need to read them both to understand each one of them. Tutorial You can find a hands-on tutorial on multibase at the end of this post. Multicodec Motivation Multistreams are self-describing protocol/encoding streams. Multicodec uses an agreed-upon “protocol table”. It is designed for use in short strings, such as keys or identifiers (i.e CID). How does the protocol work? multicodec is a self-describing multiformat, it wraps other formats with a tiny bit of self-description. A multicodec identifier is a varint. A chunk of data identified by multicodec will look like this: <multicodec><encoded-data> # To reduce the cognitive load, we sometimes might write the same line as: <mc><data> Another useful scenario is when using the multicodec as part of the keys to access data, for example: # suppose we have a value and a key to retrieve it "<key>" -> <value># we can use multicodec with the key to know what codec the value is in "<mc><key>" -> <value> It is worth noting that multicodec works very well in conjunction with multihash and multiaddr, as you can prefix those values with a multicodec to tell what they are. MulticodecProtocol Tables Multicodec uses “protocol tables” to agree upon the mapping from one multicodec code. These tables can be application specific, though — like with other multiformats — we will keep a globally agreed upon table with common protocols and formats. Multicodec Path, also known as multistream Multicodec defines a table for the most common data serialization formats that can be expanded overtime or per application bases, however, in order for two programs to talk with each other, they need to know beforehand which table or table extension is being used. In order to enable self-descriptive data formats or streams that can be dynamically described, without the formal set of adding a binary packed code to a table, we have multistream, so that applications can adopt multiple data formats for their streams and with that create different protocols. Now let’s answer a few questions to understand its significance. Why Multicodec? Because multistream is too long for identifiers. We needed something shorter. Why varints? So that we have no limitation on protocols. Don’t we have to agree on a table of protocols? Yes, but we already have to agree on what protocols themselves are, so this is not so hard. The table even leaves some room for custom protocol paths, or you can use your own tables. The standard table is only for common things. Where did multibase go? For a period of time, the multibase prefixes lived in this table. However, multibase prefixes are symbols that may map to multiple underlying byte representations (that may overlap with byte sequences used for other multicodecs). Including them in a table for binary/byte identifiers lead to more confusion than it solved. You can still find the table in multibase.csv. Implementations You can find a number of multicodec implementations in multiple languages. Tutorial You can find a hands-on tutorial on multicodec at the end of this post. Multistream Motivation Multicodecs are self-describing protocol/encoding streams. (Note that a file is a stream). It’s designed to address the perennial problem: I have a bitstring, what codec is the data coded with? Instead of arguing about which data serialization library is the best, let’s just pick the simplest one now, and build upgradability into the system. Choices are never forever. Eventually, all systems are changed. So, embrace this fact of reality, and build change into your system now. Multicodec frees you from the tyranny of past mistakes. Instead of trying to figure it all out beforehand, or continue using something that we can all agree no longer fits, why not allow the system to evolve and grow with the use cases of today, not yesterday. To decode an incoming stream of data, a program must either - know the format of the data a priori, or - learn the format from the data itself. (1) precludes running protocols that may provide one of many kinds of formats without prior agreement on which. multistream makes (2) neat using self-description. Moreover, this self-description allows straightforward layering of protocols without having to implement support in the parent (or encapsulating) one. How does the protocol work? multistream is a self-describing multiformat, it wraps other formats with a tiny bit of self-description: <varint-len>/<codec>\n<encoded-data> For example, let’s encode a JSON doc: // encode some json const buf = new Buffer(JSON.stringify({ hello: 'world' }))const prefixedBuf = multistream.addPrefix('json', buf) // prepends multicodec ('json') console.log(prefixedBuf) // <Buffer 06 2f 6a 73 6f 6e 2f 7b 22 68 65 6c 6c 6f 22 3a 22 77 6f 72 6c 64 22 7d>console.log(prefixedBuf.toString('hex')) // 062f6a736f6e2f7b2268656c6c6f223a22776f726c64227d// let's get the Codec and then get the data backconst codec = multicodec.getCodec(prefixedBuf) console.log(codec) // jsonconsole.log(multistream.rmPrefix(prefixedBuf).toString()) // "{ \"hello\": \"world\" } So, buf is: hex: 062f6a736f6e2f7b2268656c6c6f223a22776f726c64227d ascii: /json\n"{\"hello\":\"world\"}" Note that on the ASCII version, the varint at the beginning is not being represented, you should account that. The Protocol Path multistream allows us to specify different protocols in a universal namespace, that way being able to recognize, multiplex, and embed them easily. We use the notion of a path instead of an id because it is meant to be a Unix-friendly URI. A good path name should be decipherable — meaning that if some machine or developer — who has no idea about your protocol — encounters the path string, they should be able to look it up and resolve how to use it. An example of a good path name is: /bittorrent.org/1.0 An example of a great path name is: /ipfs/Qmaa4Rw81a3a1VEx4LxB7HADUAXvZFhCoRdBzsMZyZmqHD/ipfs.protocol /http/w3id.org/ipfs/1.1.0 These path names happen to be resolvable — not just in a “multistream muxer(e.g multistream-select)” but on the internet as a whole (provided the program (or OS) knows how to use the /ipfs and /http protocols). Now, let’s answer a few questions to understand its significance. Why Multistream? Today, people speak many languages and use common ones as an interface. But every “common language” has evolved over time, or even fundamentally switched. Why should we expect programs to be any different? And the reality is they’re not. Programs use a variety of encodings. Today we like JSON. Yesterday, XML was all the rage. XDR solved everything, but it’s kinda retro. Protobuf is still too cool for school. capnp (“cap and proto”) is for cerealization hipsters. The one problem is figuring out what we’re speaking. Humans are pretty smart, we pick up all sorts of languages over time. And we can always resort to pointing and grunting (the ASCII of humanity). Programs have a harder time. You can’t keep piping JSON into a protobuf decoder and hope they align. So we have to help them out a bit. That’s what multicodec is for. Full paths are too big for my use case, is there something smaller? Yes, check out multicodec. It uses a varint and a table to achieve the same thing. Implementations You can find a number of multistream implementations in multiple languages. Tutorial You can find a hands-on tutorial on multisteam at the end of this post. Multistream-Select Motivation Some protocols have sub-protocols or protocol-suites. Often, these sub-protocols are optional extensions. Selecting which protocol to use — or even knowing what is available to choose from — is not simple. What if there was a protocol that allowed mounting or nesting other protocols, and made it easy to select which protocol to use. (This is sort of like ports, but managed at the protocol level — not the OS — and human-readable). How does the Protocol work? The actual protocol is very simple. It is a multistream protocol itself, it has a multicodec header. And it has a set of other protocols available to be used by the remote side. The remote side must enter: > <multistream-header> > <multistream-header-for-whatever-protocol-that-we-want-to-speak> for - The <multistream-header-of-multistream>ensures a protocol selection is happening. - The <multistream-header-for-whatever-protocol-is-then-selected>hopefully describes a valid protocol listed. Otherwise, we return a na("not available") error: na\n# in hex (note the varint prefix = 3) # 0x036e610a for example: # open connection + send multicodec headers, inc for a protocol not/some-protocol-that-is-not-available# open connection + signal protocol not available. < /ipfs/QmdRKVhvzyATs3L6dosSb6w8hKuqfZK2SyPVqcYJ5VLYa2/multistream-select/0.3.0 < na# send a selection of a valid protocol + upgrade the conn and send traffic > /ipfs/QmVXZiejj3sXEmxuQxF2RjmFbEiE9w7T82xDn3uYNuhbFb/ipfs-dht/0.2.3 > <dht-traffic> > ...# receive a selection of the protocol + sent traffic < /ipfs/QmVXZiejj3sXEmxuQxF2RjmFbEiE9w7T82xDn3uYNuhbFb/ipfs-dht/0.2.3 < <dht-traffic> < ... Note 1: Every multistream message is a “length-prefixed-message”, which means that every message is prepended by a varint that describes the size of the message. Note 2: Every multistream message is appended by a \n character, this character is included in the byte count that is accounted for by the prepended varint. Listing It is also possible to “list” the available protocols. A list message is simply: ls\n# in hex (note the varint prefix = 3) 0x036c730a So a remote side asking for a protocol listing would look like this: # request <multistream-header-for-multistream-select> ls\n# response <varint-total-response-size-in-bytes><varint-number-of-protocols> <multicodec-of-available-protocol> <multicodec-of-available-protocol> <multicodec-of-available-protocol> ... For example # send request > /ipfs/QmdRKVhvzyATs3L6dosSb6w8hKuqfZK2SyPVqcYJ5VLYa2/multistream-select/0.3.0 > ls# get response < < /ipfs/QmVXZiejj3sXEmxuQxF2RjmFbEiE9w7T82xDn3uYNuhbFb/ipfs-dht/1.0.0 < /ipfs/QmVXZiejj3sXEmxuQxF2RjmFbEiE9w7T82xDn3uYNuhbFb/ipfs-bitswap/0.4.3 < /ipfs/QmVXZiejj3sXEmxuQxF2RjmFbEiE9w7T82xDn3uYNuhbFb/ipfs-bitswap/1.0.0# send selection, upgrade connection, and start protocol traffic > /ipfs/QmVXZiejj3sXEmxuQxF2RjmFbEiE9w7T82xDn3uYNuhbFb/ipfs-dht/0.2.3 > <ipfs-dht-request-0> > <ipfs-dht-request-1> > ...# receive selection, and upgraded protocol traffic. < /ipfs/QmVXZiejj3sXEmxuQxF2RjmFbEiE9w7T82xDn3uYNuhbFb/ipfs-dht/0.2.3 < <ipfs-dht-response-0> < <ipfs-dht-response-1> < ... Example # greeting > /http/multiproto.io/multistream-select/1.0 < /http/multiproto.io/multistream-select/1.0 # list available protocols > /http/multiproto.io/multistream-select/1.0 > ls < /http/google.com/spdy/3 < /http/w3c.org/http/1.1 < /http/w3c.org/http/2 < /http/bittorrent.org/1.2 < /http/git-scm.org/1.2 < /http/ipfs.io/exchange/bitswap/1 < /http/ipfs.io/routing/dht/2.0.2 < /http/ipfs.io/network/relay/0.5.2 # select protocol > /http/multiproto.io/multistream-select/1.0 > ls > /http/w3id.org/http/1.1 > GET / HTTP/1.1 > < /http/w3id.org/http/1.1 < HTTP/1.1 200 OK < Content-Type: text/html; charset=UTF-8 < Content-Length: 12 < < Hello World Implementations You can find a number of multistream-select implementations in multiple languages. Tutorial You can find a hands-on tutorial on multistream-select at the end of this post. Multigram Multigram operates on datagrams, which can be UDP packets, Ethernet frames, etc. and which are unreliable and unordered. All it does is prepend a field to the packet, which signifies the protocol of this packet. The endpoints of the connection can then use different packet handlers per protocol. As Multigram is WIP, I will not go through it in depth. But you want to track its development, you can visit here. Alright, now as we have covered “What”, let’s play with it a bit to get a taste of its power 🔥 Let’s Play with Multiformats 🔥🔥🔥 Here we will play a bit with Multihash, Multiaddr, and Multicodec. You can also find the complete code for this tutorial here. We will use the JS implementations for our tutorial. Project Setup Create a folder named multiformats_tut . Now, go into the folder. Installation Make sure that you have installed npm and nodejs on your system. Run this command. npm install multiaddr multihashes multibase Let’s write some code Multihash Create a file multihash_tut.js inside the folder. Now, run the code using node multihash_tut.js . You will see the same output that we added in the comments. Multiaddr Create a file multiaddr_tut.js inside the folder. Now run the code using node multiaddr_tut.js . You will see the same output that we added in the comments. Multibase Create a file multibase_tut.js inside the folder. Now run the code using node multibase_tut.js . You will see the same output that we added in the comments. Multicodec You can find multicodec tutorial here. Multistream You can find multistream tutorial here. Multistream-select You can find multistream-select tutorial here. Congratulations🎉🎉 You now have the power to Future-proof a lot of things. That’s it for this part. In the next part, we will explore Libp2p. You can check it out here: Thanks for reading ;) Learned something? Press and hold Senior blockchain developer and has worked on several blockchain platforms including Ethereum, Quorum, EOS, Nano, Hashgraph, IOTA etc. He is a Speaker, Writer and a drop-out from IIT Delhi. Want to learn more? Check out my previous articles. ConsensusPedia: An Encyclopedia of 30+ Consensus Algorithms A complete list/comparison of all consensus algorithms. hackernoon.com Getting Deep Into Ethereum: How Data Is Stored In Ethereum? In this post, we will see how states and transactions are stored in Ethereum and how it is different from Bitcoin. hackernoon.com ContractPedia: An Encyclopedia of 40+ Smart Contract Platforms A Complete Comparision of all Blockchain/DLT Platforms hackernoon.com Clap 50 times and follow me on Twitter: @vasa_develop
https://medium.com/hackernoon/understanding-ipfs-in-depth-4-6-what-is-multiformats-cf25eef83966
CC-MAIN-2020-40
en
refinedweb
Hi,! Now is the meat. :) To highlight where the questions are, I put "QUESTION:" at the beginning of the paragraph, below. Sorry if the detail is too long. I am using Mandrake Linux 10 (kernel = 2.6.3-7mdk, gcc = 3.3.2-6mdk) and boost library version 1.31.0 (official release). I compiled the *whole* boost library using bjam, as directed in the website. The first attempt to "wrap" my code with boost::python went fine. Here's the wrapper initialization code: BOOST_PYTHON_MODULE(HubbardGP) { using namespace boost::python; TBH_PY_DEBUG(("Initializing HubbardGP module interface\n")); class_<HubbardGP>("HubbardGP") .def("OpenFiles", &HubbardGP::OpenFiles) .def("Solve", &HubbardGP::Solve) .def("ReportResults", &HubbardGP::ReportResults) // .add_property("ndim", &HubbardGP::ndim_pyget) ; TBH_PY_DEBUG(("Done initializing HubbardGP module interface\n")); } Please disregard "TBH_PY_DEBUG" there--it just calls C's printf function to print the string on the screen.. QUESTION: Without the .addproperty() stuff (that's commented in the snippet above), the code runs fine. But when I added the .addproperty() line, a calamity happens: when I was about to instantiate HubbardGP object, the computer froze. When I checked using top(1), the python program tries to allocate a huge chunk of memory, which did NOT happen before the .addproperty() is added in my wrapper. I re-ran strace(1), and found that the program (either python or the boost_python lib, or remotely possibly my own code?) attempted to allocate ~590MB of memory. Here's the strace(1) output, when I forcibly LIMIT the amount of vmem available to the program to 250 MB only: [del] write(1, "Initializing HubbardGP module in"..., 40Initializing HubbardGP module interface) = 40 write(1, "Done initializing HubbardGP modu"..., 45Done initializing HubbardGP module interface) = 45 close(3) = 0 futex(0x804a998, FUTEX_WAKE, 1) = 0 write(1, "Creating an instance of HubbardG"..., 34Creating an instance of HubbardGP) = 34 > mmap2(NULL, 595271680, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, > 0) = -1 ENOMEM (Cannot allocate memory) brk(0) = 0x80d6000 brk(0x2b888000) = 0x80d6000 > mmap2(NULL, 595406848, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, > 0) = -1 ENOMEM (Cannot allocate memory) mmap2(NULL, 2097152, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) = 0x87cb5000 munmap(0x87cb5000, 307200) = 0 munmap(0x87e00000, 741376) = 0 mprotect(0x87d00000, 135168, PROT_READ|PROT_WRITE) = 0 > mmap2(NULL, 595271680, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, > 0) = -1 ENOMEM (Cannot allocate memory) futex(0x401edaf4, FUTEX_WAKE, 2147483647) = 0 futex(0x402207d4, FUTEX_WAKE, 2147483647) = 0 write(2, "Traceback (most recent call last"..., 35Traceback (most recent call last):) = 35 open("test-lattice-params.py", O_RDONLY|O_LARGEFILE) = 3 write(2, " File \"test-lattice-params.py\","..., 46 File "test-lattice-params.py", line 4, in ?) = 46 fstat64(3, {st_mode=S_IFREG|0644, st_size=267, ...}) = 0 mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xab5b2000 read(3, "import HubbardGP\n\nprint \"Creatin"..., 4096) = 267 write(2, " ", 4 ) = 4 write(2, "H = HubbardGP.HubbardGP()\n", 26H = HubbardGP.HubbardGP() ) = 26 close(3) = 0 munmap(0xab5b2000, 4096) = 0 write(2, "MemoryError", 11MemoryError) = 11 write(2, "\n", 1 ) = 1 [del] There are 3 points when it tries to allocate huge amount of memory. Now, I'm too new to both python and boost::python. Could you help me with this? I don't believe that my code was the one doing the mess. I tend to think that somehow the compiled boost_python code was acting up here. But I can't debug the code easily, as it involves running python itself in the debugger. How do you debug such a problem?. I tried once to regenerate the problem using a much smaller testcase, but the problem (huge mmap2) didn't show up. As a reference, here's my python test code: import HubbardGP print "Creating an instance of HubbardGP" H = HubbardGP.HubbardGP() print "Done, now opening files" x = H.OpenFiles("/tmp/file1.txt", "/tmp/file2.txt", "/tmp/file3.txt") print "x is %d" % x H.Solve() print "NOW REPORTING RESULTS:" H.ReportResults() I would appreciate if someone helps me out in this respect. The full source code of the wrapped C++ object is available if you need to look into it. But it's way too large to post here. Thanks, Wirawan
https://mail.python.org/pipermail/cplusplus-sig/2004-July/007300.html
CC-MAIN-2020-40
en
refinedweb
elijah reid5,147 Points what am I doing wrong it looks right but will not process so am I missing an item using System; namespace Treehouse.CodeChallenges { public class Program { public Func<int,int> Square = delegate (int number) {return number*number;} } } 2 Answers Steven Parker203,115 Points You forgot to end your delegate assignment with a semicolon. Also, the "check work" seems to have it's own ideas about syntax. It seems to want a space between the int specs of the Func, and it also wants spaces around the multiply operator. public Func<int, int> Square = delegate (int number) {return number * number;}; You might want to report the space issue as a bug to Support, it might even get you a "special Exterminator badge". elijah reid5,147 Points thank you, damn spaces elijah reid5,147 Points elijah reid5,147 Points Challenge Task 1 of 3 In the Program class, declare a public Func field named Square that takes an int and returns an int. Use an anonymous method to assign a delegate that takes an int parameter named number and returns the result of number * number.
https://teamtreehouse.com/community/what-am-i-doing-wrong-it-looks-right-but-will-not-process
CC-MAIN-2020-40
en
refinedweb
Optimizing and changing size of Xpresso nodes On 17/11/2017 at 03:02, xxxxxxxx wrote: Hi guys, Does anyone know if it's possible to optimize or change sizes of Xpresso nodes in Python? Can't find it in the Python documentation, but did find something related for C++, I think! Thank you very much! Andre On 17/11/2017 at 04:21, xxxxxxxx wrote: Hello don't know about Optimize command if you can acces it or not. For the size there is no proper c4d wrapper arround it but you can edit it using this (note sure if it's an acceptable way from Maxon, I mean I don't know if it can change in the future or not) import c4d def main() : nodeMaster = doc.GetActiveTag().GetNodeMaster() node = nodeMaster.GetRoot().GetDown() # Set Y size node.GetDataInstance().GetContainerInstance(1001).GetContainerInstance(1000)[109] = 50 # Set X size node.GetDataInstance().GetContainerInstance(1001).GetContainerInstance(1000)[108] = 100 c4d.EventAdd() # iterate bc maybe you want to modify other thing like position [100] and [101] bc = node.GetDataInstance()[1001][1000] for index, value in bc: try : print "Index: %i, Value: %s" % (index, str(value)) except: pass if __name__=='__main__': main() Hope it's help ! :) On 17/11/2017 at 08:36, xxxxxxxx wrote: Hi Graphos, Many thanks again for your awesome help! It works, but not with all the nodes that I want. I think it may be related to the IDs, which to be honest not sure how you got it in the first place. I will try and explain my problem in more detail below: I have an Xgroup parent node which encloses all its child nodes. The code you gave to change it's size unfortunately didn't work for the Xgroup but worked for all it's child nodes. Does this has something to do with the ID? XGroup (Default) |_ MainGrp <--- This is the one I want to affect it's size initially |_ Child_1 |_ Child_2 |_ Child_3 If I get the ID by calling the node .GetOwnerID() gives me 1001142. The IDs 1001 and 1000 that you gave in your example, where did you get it from and why that specific order please? node.GetDataInstance().GetContainerInstance(1001).GetContainerInstance(1000)[109] = 50 Apologies for the all the questions, but having a hard time understanding it . Appreciate your awesome effort and thank you again! Andre On 18/11/2017 at 04:11, xxxxxxxx wrote: As you may know or not. A node inside xpresso is represented by a c4d.modules.graphview.GvNode wich inherite from a BaseList2D (like all other Object within a scene, like a cube for exemple, which is a BeseObject that inherite from a BaseList2D) So that mean you can iterate an Xpresso graph (a c4d.modules.graphview.GvNodeMaster) as you iterate an object list. Using GetDown() / GetNext(). When you fully understand it you can now read Recursive hierarchy iteration or Non-recursive hierarchy iteration So now you can easily affect only the node you want by doing selecting it by name for exemple if obj.GetName() == "_MainGrp()": Do you action. Of course you can use any way you want for select which node you want to resize. About how I do it. I suggest you to read BaseContainer Manual even if it's in C++ you may learn new things. So regarding a Node, as said before it's a BaseList2D, so like all BaseList2D it got a function called GetDataInstance which return a BaseContainer of the data. You can then iterate it using Bc iteration. And here a video that show how I did know the IDs that I have to use, just by testing. Hope it's help ! :) On 20/11/2017 at 01:06, xxxxxxxx wrote: Oh WOW! Thank you very much for your explanation and taking the time to make a freaking video. Will have a look now, but it definitely makes more sense. On 20/11/2017 at 05:13, xxxxxxxx wrote: Hi, unfortunately from MAXON's side, there's no officially supported way to achieve what's requested in this thread. gr4ph0s findings may work (actually not questioning this), just be aware, this is not officially recommended and might break anytime in future (e.g. by changed IDs). While Niklas is a very supportive member of this community and we appreciate him a lot, in this regard we disagree and I can only recommend to use the online docs published by MAXON (e.g. in order not to miss the latest fixes). The link in the first post to GvBodyDefaultSize in official documentation. Finally gr4ph0s had a link to the old C4D Programming blog. As we are not sure, how long that will be available anymore as it got replaced completely by developers.maxon.net, here's the link to Recursive Hierarchy Iteration. No need to feel sorry, nobody did anything wrong, no critique intended, just a few minor annotations and corrections from our side. On 20/11/2017 at 13:42, xxxxxxxx wrote: Hi Andreas, Thank you for your input and will keep your advice in mind. It's a big learning curve that hopefully will get less "curvier". Cheers!
https://plugincafe.maxon.net/topic/10463/13907_optimizing-and-changing-size-of-xpresso-nodes
CC-MAIN-2020-40
en
refinedweb
In this tutorial, we will cover how dictionary comprehension works in Python. It includes various examples which would help you to learn the concept of dictionary comprehension and how it is used in real-world scenarios. It is defined in curly braces { }. Each key is followed by a colon (:) and then values. You can also execute dictionary comprehension with just defining only one variable What is Dictionary?Dictionary is a data structure in python which is used to store data such that values are connected to their related key. Roughly it works very similar to SQL tables or data stored in statistical softwares. It has two main components - - Keys : Think about columns in tables. It must be unique (like column names cannot be duplicate) - Values : It is similar to rows in tables. It can be duplicate. Syntax of Dictionary d = {'a': [1,2], 'b': [3,4], 'c': [5,6]}To extract keys, values and structure of dictionary, you can submit the following commands. d.keys() # 'a', 'b', 'c' d.values() # [1, 2], [3, 4], [5, 6] d.items()Like R or SAS, you can create dataframe or dataset using pandas package in python. import pandas as pd pd.DataFrame(data=d) a b c 0 1 3 5 1 2 4 6 What is Dictionary Comprehension?Like List Comprehension, Dictionary Comprehension lets us to run for loopon dictionary with a single line of code. Both list and dictionary comprehension are a part of functional programming which aims to make coding more readable and create list and dictionary in a crisp way without explicitly using for loop.The difference between list and dictionary comprehension is that list comprehension creates list. Whereas dictionary comprehension creates dictionary. Syntax is also slightly different (refer the succeeding section). List is defined with square bracket [ ]whereas dictionary is created with { } Syntax of Dictionary Comprehension {key: value for (key, value) in iterable}Iterable is any python object in which you can loop over. For example, list, tuple or string. keys = ['a', 'b', 'c'] values = [1, 2, 3] {i:j for (i,j) in zip(keys, values)}It creates dictionary {'a': 1, 'b': 2, 'c': 3}. It can also be written without dictionary comprehension like dict(zip(keys, values)). You can also execute dictionary comprehension with just defining only one variable i. In the example below, we are taking square of i for assigning values in dictionary. range(5) returns 0 through 4 as indexing in python starts from 0 and excluding end point. If you want to know how dictionary comprehension is different from For Loop, refer the table below. Dictionary Comprehension d = {i:i**2 for i in range(5)} For Loop d = {} for i in range(5): d[i]=i**2 print(d) Output {0: 0, 1: 1, 2: 4, 3: 9, 4: 16} d.keys() returns [0, 1, 2, 3, 4] d.values() returns [0, 1, 4, 9, 16] Create dictionary containing alphabets as keysSuppose you want letters from 'a' through 'e' as keys in dictionary and digits from 0 through 4 as values. string.ascii_lowercase[:5]returns abcde. import string {i:j for (i, j) in zip(string.ascii_lowercase[:5], range(5))} {'a': 0, 'b': 1, 'c': 2, 'd': 3, 'e': 4} Create new dictionary out of existing dictionary dic.items()returns the whole structure of dictionary which comprises of both keys and values. In the following example, we are multiplying values of existing dictionary by 2 and building a new dictionary named new_dic. dic = {'a': 1, 'b': 2, 'c': 3, 'd': 4} new_dic = {i:j*2 for i,j in dic.items()} new_dic {'a': 2, 'b': 4, 'c': 6, 'd': 8} How to use IF statement in Dictionary ComprehensionHere we are applying conditional statement and considering values above 2. dic = {'a': 1, 'b': 2, 'c': 3, 'd': 4} {i:j*2 for (i,j) in dic.items() if j>2} Output {'c': 6, 'd': 8} IF-ELSE condition in Dictionary ComprehensionYou can apply if-else statement like we do in list comprehension. This example outlines how to show odd or even numbers in values in dictionary. {i:('even' if j%2==0 else 'odd') for (i,j) in dic.items()} {'a': 'odd', 'b': 'even', 'c': 'odd', 'd': 'even'} Use Enumerate Function in dictionary comprehensionEnumerate function runs on list, tuple or string and returns element and its index. list(enumerate(['a', 'b', 'c'])) Output [(0, 'a'), (1, 'b'), (2, 'c')]With the use of this function, we can create a dictionary with elements of the list as keys and the index as values. mylist = ['a', 'b', 'c'] {j:i for i,j in enumerate(mylist)} {'a': 0, 'b': 1, 'c': 2} Remove selected items from dictionarySuppose you have a dictionary containing cities name along with some values and you want to delete specific multiple items (let's say Delhi and London) from dictionary. In this example, i refers to keys of dictionary and d[i] evaluates to d[key]. For e.g. d['Mumbai] returns 221. d = {'Delhi': 121, 'Mumbai': 221, 'New York': 302, 'London': 250} {i:d[i] for i in d.keys() - {'Delhi','London'}}It returns these two items {'Mumbai': 221, 'New York': 302} Good tutorial deepanshu ! Thanks. Thanks for appreciation! Usefull for beginners Glad you found it useful Good one ... Good comprehension Cheers (y) Thanks a lot . I love dict comprehension now . Awesome!
https://www.listendata.com/2019/07/python-dictionary-comprehension.html
CC-MAIN-2020-40
en
refinedweb
US5124004A - Distillation process for ethanol - Google PatentsDistillation process for ethanol Download PDF Info - Publication number - US5124004AUS5124004A US07/475,732 US47573290A US5124004A US 5124004 A US5124004 A US 5124004A US 47573290 A US47573290 A US 47573290A US 5124004 A US5124004 A US 5124004A - Authority - US - United States - Prior art keywords - heat - vapor - liquid - distillation - withdrawn - Title 160 - 239000004346 Ethanol Substances 0 Claims Description Title 69 - Title 71 - 238000000895 extractive distillation Methods 0 Abstract Claims Description 19 - 239000007788 liquids Substances 0 Abstract Claims Description 117 - 239000000203 mixtures Substances 0 Abstract Claims Description 65 - 238000000034 methods Methods 0 Description Title 69 - 239000004452 animal feeding substances Substances 0 Claims Description 58 - 150000001875 compounds Chemical class 0 Abstract Description 6 - 238000009833 condensation Methods 0 Claims Description 11 - -1 ethanol-water Chemical compound 0 Abstract Description 12 - 238000001704 evaporation Methods 0 Claims Description 8 - 230000035611 feeding Effects 0 Claims Description 58 - 230000001965 increased Effects 0 Claims Description 11 - 238000005365 production Methods 0 Abstract Description 19 - 239000000047 products Substances 0 Claims Description 26 - 150000003839 salts Chemical class 0 Claims Description 36 - 238000000926 separation method Methods 0 Abstract Description 44 - 239000011780 sodium chloride Substances 0 Claims Description 36 - 239000004788 BTU Substances 0 Description 12 - 241000894006 Bacteria Species 0 Description 2 - 229930000340 Butanol Natural products 0 Description 1 - 241000193403 Clostridium Species 0 Description 1 - 229930006649 D-Glucose Natural products 0 Description 2 - 241000196324 Embryophyta Species 0 Description 9 - 101700053403 LATE family Proteins 0 Description 1 - 235000000365 Oenanthe javanica Nutrition 0 Description 1 - 240000008881 Oenanthe javanica Species 0 Description 1 - 241000183024 Populus tremula Species 0 Description 12 - 229960004109 Potassium Acetate Drugs 0 Description 8 - SCVFZCLFOSHCOH-UHFFFAOYSA-M Potassium= [K+].CC([O-])=O SCVFZCLFOSHCOH-UHFFFAOYSA-M 0 Description 8 - 206010037660 Pyrexia Diseases 0 Description 1 - 240000004808 Saccharomyces cerevisiae Species 0 Description 6 - 239000008066 acetone Substances 0 DescriptionMTQuMzg1IDE2NS43NDYsMMTQuMzg1IDEzNC4yNTQsM8000007792 addition Methods 0 Description 10 - 238000004458 analytical methods Methods 0 Description 3 - 230000003935 attention Effects 0 Description 1 - 230000003190 augmentative Effects 0 Description 1 - 230000001580 bacterial Effects 0 Description 1 - UHOVQNZJYSORNB-UHFFFAOYSA-NAyMTguMTgygyLDI2OC4wOTjIwOTgsMjM2Ljc0NCA0Ny40ODI1LDE1NC4wNgxLjODEuODE4MiwzMS45MDU2IDIxOCjI3Myw1OS4xNzg0IDE5Ny43MjcsNTkuMTcIgNjEuMzE4Miw3NS40MS4wNDY2LDQzLjE1NTQgNTcuNTIzOSw2NiQyIDYxLjMxzE4Miw3NS40NjAxIDIyLjY4MTgsNzUuNj40NzczLDE2LjI2NzIgNTUuNTIyNywxNi4yN1=CC=CC=C1 UHOVQNZJYSORNB-UHFFFAOYSA-N 0 Description 5 - 230000015572 biosynthetic process Effects 0 Description 1 - 238000009835 boiling Methods 0 Description 1 - 235000010633 broth Nutrition 0 Description 7 - 238000004364 calculation methods Methods920002678 cellulose Polymers 0 Description 1 - 239000001913 cellulose Substances 0 Description 1 - 238000005119 centrifugation Methods 0 Description 1 - 238000006243 chemical reaction Methods 0 Description 1 - 239000007795 chemical reaction product Substances 0 Description 4 - 239000003795 chemical substance by application Substances 0 Description 5 - 230000000052 comparative effects Effects 0 Description 1 - 239000002131 composite material Substances 0 Description 1 - 238000007906 compression Methods 0 Description 6 - 238000005094 computer simulation Methods 0 Description 8 - 239000000498 cooling water Substances 0 Description 1 - 230000000875 corresponding Effects 0 Description 8 - 230000003247 decreasing Effects 0 Description 3 - 230000018109 developmental process Effects 0 Description 2 - 238000009826 distribution Methods 0 Description 1 - 230000000694 effects Effects 0 Description 14 - 238000004134 energy conservation Methods 0 Description 1 - 238000005265 energy consumption Methods 0 Description 5 - 238000005516 engineering processes Methods 0 Description 3 - 238000006047 enzymatic hydrolysis Methods 0 Description 1 - 230000002349 favourable Effects 0 Description 1 - 230000004151 fermentation Effects 0 Description 36 - 238000000855 fermentation Methods 0 Description 35 - 238000001914 filtration Methods 0 Description 1 - 238000005755 formation Methods 0 Description 1 - 238000005194 fractionation Methods 0 Description 1 - 239000000727 fractions Substances 0 Description 15 - 239000007789 gases Substances 0 Description 1 - 230000014509 gene expression Effects 0 Description 4 - 239000008103 glucose Substances 0 Description 2 - 239000000413 hydrolysate Substances 0 Description 1 - 230000002401 inhibitory effects Effects 0 Description 8 - 238000009434 installation Methods 0 Description 2 - 238000009413 insulation Methods 0 Description 1 - 239000011133 lead Substances 0 Description 2 - 239000010912 leaf Substances 0 Description 2 - 230000000670 limiting Effects 0 Description 6 - 239000007791 liquid phases Substances 0 Description 2 - 239000000463 materials Substances 0 Description 10 - 238000005259 measurements Methods 0 Description 1 - 230000002503 metabolic Effects 0 Description 5 - 238000006011 modification Methods 0 Description 2 - 230000004048 modification Effects 0 Description 2 - 238000009740 moulding (composite fabrication) Methods 0 Description 1 - LRHPLDYGYMQRHN-UHFFFAOYSA-N n-but43NzYgNzEuNzQ4OCNDg4LDEzMy4yMjQgMTI5Ljg2MSwxNjODYxLDE2Ni43NzYgMTg3Ljk3NCcuOTc0LDEzMy4yMjQgMjEwLjUzQ2LjI1IDIzMy4wOTYsMTU5LjI41ODEnIHk9JzE3NC4yNzUzMSAxOS44Mjg4LDM3LjIgyODgsMzcuMjQ2OSAzNi4yOTQsNDYjk0LDQ2Ljc1MzEgNTIuNzU5MiwzNy4y43NTkyLDM3LjI0NjkgNTguMjQ3Niw0MC40MTc2LDQwLjQxNTYgNjMuNzM2LDQzLjU41MTg0JyB5PSc0OS45MjECCCO LRHPLDYGYMQRHN-UHFFFAOYSA-N 0 Description 1 - 239000003345 natural gas Substances 0 Description 1 - 235000015097 nutrients Nutrition 0 Description 1 - 238000005457 optimization Methods 0 Description 1 - 230000036961 partial Effects 0 Description 2 - 239000003208 petroleum Substances 0 Description 1 - 235000011056 potassium acetate Nutrition 0 Description 8 - 239000001624 potassium acetate Substances 0 Description 8 - 230000003405 preventing Effects 0 Description 1 - 230000000135 prohibitive Effects 0 Description 1 - 238000000746 purification Methods 0 Description 4 - 230000002829 reduced Effects 0 Description 4 - 230000001603 reducing Effects 0 Description 9 - 238000006722 reduction reaction Methods 0 Description 6 - 230000002441 reversible Effects 0 Description 1 - 239000011833 salt mixtures Substances 0 Description 1 - 239000011555 saturated liquid Substances 0 Description 1 - 230000035945 sensitivity Effects 0 Description 1 - 238000004088 simulation Methods 0 Description 10 - 239000007787 solids Substances 0 Description 2 - 239000002904 solvents Substances 0 Description 2 - 241000894007 species Species 0 Description 1 - 239000007921 sprays Substances 0 Description 1 - 239000010902 straw Substances 0 Description 1 - 239000000126 substances Substances 0 Description 2 - 239000000758 substrates Substances 0 Description 18 - 238000000844 transformation Methods 0 Description 1 - 230000001131 transforming Effects 0 Description 2 - 238000009834 vaporization Methods 0 Description 2 - 239000002699 waste material Substances 0 Description 1 - 238000005303 weighing Methods 0/143—Fractional distillation or use of a fractionation or rectification column by two or more of a fractionation, separation or rectification step - B01D3/146—Multiple effect distillation - B—PERFORMING OPERATIONS; TRANSPORTING - B01—PHYSICAL OR CHEMICAL PROCESSES OR APPARATUS IN GENERAL - B01D—SEPARATION - B01D1/00—Evaporating - B01D1/28—Evaporating with vapour compression - B01D1/284—Special features relating to the compressed vapour - B01D1/2856—The compressed vapour is used for heating a reboiler or a heat exchanger outside an evaporator - B—PERFORMING OPERATIONS; TRANSPORTING - B01—PHYSICAL OR CHEMICAL PROCESSES OR APPARATUS IN GENERAL - B01D—SEPARATION - B01D3/00—Distillation or related exchange processes in which liquids are contacted with gaseous media, e.g. stripping - B01D3/007—Energy recuperation; Heat pumps -/50—Systems profiting of external or internal conditions - Y02B30/52—Heat recovery pumps, i.e. heat pump based systems or units able to transfer the thermal energy from one area of the premises or part of the facilities to a different one, improving the overall/04—Heat/14—Ejector-ed/20—Power/90—Particular type of heating Abstract Description This is a continuation-in-part of Ser. No. 06/897,986, filed Aug. 19, 1986 now U.S. Pat. No. 4,961,826, which is a divisional of Ser. No. 06/829,549, filed Feb. 13, 1986, now U.S. Pat. No. 4,626,321, which was a continuation of Ser. No. 06/525,102, filed Aug. 22, 1983, now abandoned. The present invention relates to distillation systems and to methods of operation of the systems. Attention is called to U.S. Pat. Nos. 4,234,391 (Seader); 4,308,107 (Markfort); 3,558,438 (Schoenbeck); 4,025,398 (Haselden); 4,303,478 (Field); 4,359,533 (Wilke); and 4,308,106 (Mannsfield), as well as to chapter 7, ELEMENTS OF DISTILLATION, Robinson and Gilliland (McGraw Hill); chapter 5, SEPARATION PROCESSES, King (McGraw Hill); DISTILLATION, PRINCIPLES AND DESIGN PROCEDURES, Hengstebeck (Robert E. Krieger). See also, technical articles: "Distillation With Secondary Reflux and Vaporization: A Comparative Evaluation," Mah et al.; "Extractive Distillation Employing a Dissolved Salt as Separating Agent," Cook and Furter; "Vapor Re-Use Process," Othmer; "Energy Requirements in the Separation by Mixtures By Distillation," Flower et al., "Distillation Columns With Vapor Recompression," Danzinger; "Heat Pumps in Distillation," Null; "The Heat Pump in Multicomponent Distillation," Freshwater; "Process and Design for Energy Conservation," Null et al.; "Novel Separation Technology May Supplant Distillation Towers," O'Sullivan; "Two Component Equilibrium Curves For Multicomponent Fractionation," Jenny and Cicalese. Also: "Microcalorimetry: A Tool For the Study of the Biodegradability of Straw by Mixed Bacterial Cultures," Fardeau et al.; and "Rapid Ethanol Fermentation of Cellulose Hydrolysate. II. Product and Substrate Inhibition and Optimization of Fermentor Design," Ghose and Tyagi; "Recovering Chemical Products From Dilute Fermentation Broths," Busche. Distillation is a technique for separating mixtures based on differing component volatilities. One of the most commonly used methods of separation, distillation is widely used in the petroleum, chemical, and natural gas liquids industries. The energy requirement of distillation is significant by any measure, accounting for three percent of the total U.S. energy consumption by one estimate. Thus, a meaningful reduction in the energy required for separation of mixtures by distillation would have a favorable impact on energy consumption and economics in an industrialized country. In addition, a more efficient method of distillation may be expected to make feasible processes for which separation costs are now prohibitive. The innovative techniques described herein involve moving heat within a distillation system between internal liquid and vapor streams using a heat pump, and returning removed mass streams having undergone a phase change in an optimal way. When internal streams are manipulated in this way both material streams and latent heat may be used more than once in a particular section of the distillation system. In addition, the ratio of internal liquid and vapor flows, or internal-reflux ratio (i.e., L/V herein), can be varied almost at will, often with a small investment of work due to small temperature differences between heat sources and sinks, thus circumventing the "pinch regions" which affect the energy and stage requirements of most distillation systems. The inventors refer to the applications of this inventive concept collectively as distillation with intermediate heat pumps and optimum sidestream return or IHOSR distillation. As will be discussed, IHOSR distillation appears to have special benefits when applied to dilute solutions. Since fermentation products are typically dilute, and since fermentation is frequently a practical way to convert an available substrate into a desired product, albeit at low concentration, application of IHOSR enhanced distillation to recovery of dilute products from fermentation broths appears attractive in that it may be expected to make fermentative production of volatile materials more economically feasible. This is especially so because integration of certain material and energy flows in fermentation and distillation can result in beneficial results for both processes. Specifically, the productivity of the fermentation can be increased due to alleviation of end-product inhibition and/or increasing the residence time of cells and substrate relative to that based on the feed, and the energy requirements for the distillation may be lowered by making use of the metabolic heat released during fermentation. In many cases, IHOSR distillation accomplishes more energy-efficient distillation given the constraints of the vapor-liquid equilibrium relationship for a particular mixture than does normal adiabatic distillation. Under some conditions, such as azeotrope-forming mixtures or mixtures with a low volatility ratio in a particular concentration range, there is strong incentive to change the vapor-liquid equilibrium relationship. This incentive is especially strong given the flexibility conferred by the IHOSR technique with respect to the internal reflux ratio. Changing the vapor-liquid equilibrium relationship can be achieved by changing the pressure or by adding additional component(s) to the mixture to achieve an extractive or azeotropic distillation. Combination of techniques which alter phase equilibrium with IHOSR distillation may be done either in a single distillation system or in several systems. Recovery of ethanol generated by fermentation presents several of the problems and opportunities discussed above: a separation involving dilute solutions, fermentative production with product inhibition, and a mixture which forms an azeotrope and has a low relative volatility ratio in a given concentration range. Ethanol production and recovery thus constitute an attractive potential application of the IHOSR techniques and combination of the IHOSR techniques with fermentation and altered phase-equilibrium distillation techniques; this process will frequently be used in the examples discussed below. It is the object of the present invention to provide: most generally, (1) process and apparatus concepts which reduce the energy required for the separation of mixtures by distillation; more specifically, (2) applications of the process and apparatus concepts referred to wherein the vapor-liquid equilibrium relationship is altered to achieve more energy-efficient separation and/or to achieve compositions not otherwise possible in a single distillation system due to the formation of azeotropes; (3) applications of the process and apparatus concepts referred to wherein fermentative production of materials and recovery of the materials is integrated achieving some or all of the benefits above; and most specifically, (4) the particular application of the process and apparatus concepts descriped in (1) and the applications of those concepts as described in (2) and/or (3) to production by fermentation and recovery of ethanol. The foregoing objects are achieved, generally, in a distillation system that includes a heat pump using a vapor stream from within the distillation system as the heat source and a liquid stream from within the distillation system as the heat sink, wherein at least one of the vapor stream and the liquid stream is withdrawn from the phase-contacting region of the system at a temperature intermediate between the highest and the lowest temperature in the system, and wherein at least one of the vapor stream and the liquid stream withdrawn from the phase-contacting region of the distillation system is returned to the distillation system at a point with a temperature different from that at which it was withdrawn, and wherein all withdrawn streams are returned to the distillation system at a point such that material removed in the liquid phase is returned at a point in the system with a temperature at least that at the point of liquid withdrawal and material removed in the vapor phase is returned at a point in the system with a temperature at most that at the point of vapor withdrawal. Heat sources and sinks may often be selected with small temperature difference between them, thus allowing heat to be pumped with high efficiency. In addition, the return of streams to the correct point in the column can allow significant reductions in the heat which must be supplied to the distillation system. The invention is hereinafter described with reference to the accompanying drawings in which: FIGS. 1A, 2A, 3A, and 4A show apparatus to provide distillation with intermediate heat pumps and optimum sidestream return (IHOSR); FIGS. 1B, 1C, 2B, 3B, 3C, 4B, 4C, 5B, and 5C show graphically the influence of IHOSR distillation using the McCabeTheile method of analysis, FIGS. 1B and 1C corresponding with the apparatus in FIG. 1A, FIG. 2B corresponding with the apparatus of FIG. 2A, FIGS. 3B and 3C corresponding with the apparatus of FIG. 3A, FIGS. 4B and 4C corresponding with the apparatus of FIG. 4A, FIG. 5B, and FIG. 5C corresponding with the apparatus of FIG. 5A. FIG. 5A shows apparatus that includes an IHOSR distillation system in series with a system adapted to permit addition of an extractive agent in an original configuration with extensive heat integration; FIGS. 6A, 6B, 7, 8A, and 8B show apparatus to combine IHOSR distillation with fermentative production of volatile compounds; FIG. 9 is a schematic flow diagram of the Alternative IHOSR Process particularly adapted for the recovery of ethanol from dilute ethanol/water mixtures. FIG. 10 is a conceptual block diagram of the Alternative IHOSR Process and the Aspen models used in a simulation. FIG. 11 is a graphical comparison of the predicted and experimental vapor compositions for the system ethanol-water-potassium acetate at atmospheric pressure. FIG. 12 is a plot of the profiles of ethanol concentration in the vapor phase and tempertaure, for the three distillation columns in the Alternative IHOSR Process. FIG. 13 is a graphical representation of the relative contribution of major cost components of the separation cost index for the Alternative IHOSR Process. FIG. 14 is a graphical comparison of the separation cost index for the Alternative IHOSR Process in a conventional benzene azeotropic distillation process. FIG. 15 is a graphical comparision of the energy consumption of the Alternative IHOSR Process in a conventional benzene azeotropic distillation. FIG. 16 is a plot of a parameter values required to match the separation cost indices of the Alternative IHOSR Process in a conventional benzene azeotropic distillation, as a function of ethanol concentration in the feed. In order to place the explanation hereinafter in context, there now follows a preliminary discussion of the invention mostly with reference to FIGS. 1A, 1B, and 1C for illustrative purposes. A review of the figures will reveal a certain repetition of mechanisms. In order to simplify the explanations of the various mechanisms, the same or similar designations are applied to mechanisms that perform the same or similar function. Thus, for example, the label 101A in FIG. 1A designates distillation apparatus of the present invention employing one or more compressors 2, 2A . . . ; in FIG. 2A the distillation apparatus is labeled 101B and the compressor is again labeled 2. In these explanations, emphasis is placed on a binary mixture introduced as a feed stock at 15; but the concepts are useful for distilling multi-component mixtures as well. Workers in this art will recognize FIGS. 1B and 1C to be McCabe-Theile diagrams in which the curve designated 30 is the equilibrium curve and the line designated 31 is the equality line; the feature labeled 32A is a sequence of line segments called the operating line; the ordinate (Y) in each graph represents the mole fraction of the more volatile component, hereafter referred to as "mole fraction," in the vapor and the abscissa (X) represents the mole fraction of the more volatile component in the liquid; Xf represents the mole fraction of the more volatile component in the input feed in FIG. 1B, 1C and later figures. The present invention is directed to reducing the reboiler heat requirement by manipulating internal reflux ratios, which are the slopes of the segments of the operating line, e.g., the slope of the operating line 32A between points 33A and 33B in FIG. 1B or the slopes of the operating line 32B in FIG. 1C, between points 34A and 34B and/or between points 34B and 34C. The distillation apparatus 101A employs intermediate heat pump with optimum sidestream return (IHOSR), a concept which is addressed above and later in some detail. Briefly, IHOSR distillation employs one or more heat pumps (which may be driven by compressors) using a vapor stream V in FIG. 1A withdrawn at 3A from within the distillation system (i.e., from among the distillation column marked 1 the reboiler marked 5A, and the condensor marked 12) as a heat source and a liquid stream withdrawn either from the column at 4A or elsewhere from within the distillation system as a heat sink, the heat-source vapor and the heat-sink liquid being selected so that the heat-sink liquid would have a higher temperature than liquid in equilibrium with the heat-source vapor if both liquids were at the same pressure, with heat exchange resulting in reciprocal phase change where condensation of the heat-source vapor V is accompanied by evaporation of the heat-sink liquid. To explain the concept of the previous sentence in the context of FIG. 1A, the heat pump includes the compressor 2 and the reboiler 5A that has internal heat-exchange coils 6A and 7. The coil 7, as is conventional, receives heat, usually via steam, from an outside source (not shown) to boil the liquid in the reboiler 5A. In the system shown in FIG. 1A, however, the steam heat input to the reboiler 5A is augmented by energy extracted from the vapor V which is compressed at 2 (to increase its temperature) and then condensed at 6A to extract energy therefrom and effect a phase change to a liquid L which is returned to the column 1 at 9 through a pressure reducing valve 8. Liquid in the reboiler 5A is vaporized and introduced at 13A to the bottom of the column 1. For the generalized IHOSR system, at least one of the vapor stream V and the liquid stream (at 4A in the illustrative example shown in FIG. 1A) is withdrawn from within the phase-contacting region (i.e., either from the stripping region shown at 10 or the rectifying region shown at 11 in FIG. 1A; the column 1 may be a plate column or a packed column or some other structure) of the distillation apparatus 101A and has a temperature intermediate between the highest temperature (i.e., the temperature of the reboiler 5A) and the lowest temperatures (i.e., the temperature of the condenser marked 12). At least one of the streams withdrawn from the phase-contacting region of the distillation system point of vapor withdrawal. And all withdrawn streams temperature of vapor withdrawal. (In a conventional system, temperature decreases with height in the column.) This general description may be shown to apply to the specific case of FIG. 1A as follows. The vapor is withdrawn at 3A in FIG. 1A from the phase-contacting region at a temperature intermediate between the temperature in the condenser 12A, and in the reboiler 5A; the vapor is condensed as it donates heat to the reboiler liquid, the heat sink for this case, and the stream withdrawn as vapor is returned at a point in the phase-contacting system 9 with a lower temperature than the point at which it was withdrawn. In FIG. 1A, the heat-sink liquid was never withdrawn from the system, however other embodiments of the IHOSR technique involve removal of both vapor and liquid, or liquid only, accounting for the use of the expression "at least" in reference to withdrawn streams in the general description. The preferred point in the distillation system to return a sidestream, or to introduce any feed for that matter, is the point at which the compositions of the vapor in the returned stream, if any vapor is present in the returned stream, is as close as possible to the composition of the vapor within the system at the return point, and the composition of the liquid in the return stream, if any liquid is present in the return stream, is as close as possible to the composition of the liquid within the system at the return point. Sidestream return in this manner minimizes the number of stages required for separation. It may be noted that flow rate of a returned stream relative to the flow rate of the distillate leaving the system has an influence on internal reflux ratios, and so the compositions of liquid and vapor passing each other is at a given point, in the phase-contacting device. The maximum practical flow through a given sidestream is determined by the condition where the sidestream is returned at the preferred point as described above, and where the compositions of passing liquid and vapor streams within the distillation system in the immediate vicinity of the return point are as close as practically possible to being in equilibrium given the constraint of a reasonable number of stages in nearly pinched regions occurring either in the region of stream return or elsewhere. In the context of FIG. 1A the flow rate of the removed stream (e.g., the vapor stream V removed at 3A) relative to the flow rate of the distillate leaving the system at 25 (or the flow rate of the vapor in the phase-contacting portion) must be such that the desirable compositions within the system (between withdrawal at 3A and reintroduction at 9), specified above, are achieved; said another way, the volume of vapor withdrawn at 3A and returned (as a condensate) at 9 must be enough to effect meaningful changes in the slope of the operating line between 3A and 9 in the column 1A (see, for example, the slope of the operating line segment between points 33A and 33B in FIG. 1B). Some more general considerations are now given. A distillation system consists of mechanisms to facilitate mass transfer between opposite phases moving in a counter-current fashion, a mechanism to provide a vapor rich in the more volatile component(s) and a mechanism to provide a liquid rich in the less volatile component(s). Ordinarily, these functions are provided by a column, a reboiler, and a condenser/stream splitter, respectively. However, quite different arrangements have also been proposed, for example, by Markfort, Schoenbeck and O'Sullivan. The generic term "distillation system" is used herein to refer to a collection of phase-contacting device(s), device(s) for providing or introducing vapor, and device(s) for providing or introducing liquid which accomplish the same end as a conventional system composed of a column, a reboiler, and a condenser/stream splitter. In the following discussion and subsequent claims, heat exchange is considered to occur within the distillation system when it takes place within the phase-contacting device, for example, the column 1 in FIG. 1A and later figures, the terminal condenser 12 (at the lowest temperature in the system), the terminal reboiler 5A in FIG. 1A, 5B in FIG. 2A, etc. (at the highest temperature in the system), or their equivalents in an unconventional system. Heat exchange is considered to occur outside the distillation system when it takes place at points other than the phase-contacting device, the terminal condenser, the terminal reboiler, or their equivalents. Compression of vapors involved in a distillation is considered to occur outside the distillation system. In order to distinguish from "distillation system" as just defined and the larger structure that includes further mechanisms needed to effect distillation, the term "distillation apparatus" is used herein to denote apparatus that includes at least one "distillation system" but includes other mechanisms as well: in FIGS. 1A, 2A, 3A, 4A, 5A, 6A, 6B, 7, 8A, and 8B, "distillation apparatus" is designated 101A to 101J, respectively. Within a distillation system, or a portion of a distillation system, at relatively constant pressure, the temperature of a liquid or vapor stream within the system is inversely related to the volatility of the particular liquid or vapor stream. Thus, in a normal distillation column, the volatility of both liquid and vapor streams increases with height and the temperature decreases with height. In the present description, temperature of the distillation system at a particular point is used as a reference indicating the volatility at that point relative to other points in the column. The phase-contacting portion of a distillation system may be idealized as having constant molar enthalpies of vapor and liquid throughout, constant temperature at any section perpendicular to the primary direction of liquid and vapor flows, and no unintended heat exchange with the environment. In practice, mixtures seldom have constant molar enthalpies of vapor and liquid at all compositions, passing liquid and vapor streams which are not in equilibrium have slight temperature differences, and a small amount of heat is exchanged with the environment due to imperfect insulation. However, these idealizations greatly simplify analysis and presentation, they seldom lead to conclusions which differ dramatically from real situations, and deviations from these ideal situations are well described in the literature. For the sake of simplicity and clarity these idealizations will be made in this discussion and the claims that follow. The ratio between liquid and vapor flows, or internal-reflux ratio, within the phase-contacting portion of a distillation system is a critical parameter in design of distillation systems because it relates the compositions of opposite phases as they pass one another at a cross-section taken perpendicularly to the primary direction of flow. This relationship is given by equations (1) and (2) Y.sub.i,r =X.sub.i,d -(L/Y).sub.r (X.sub.i,d -X.sub.i,r) (1) Y.sub.i,s =X.sub.i,b +(L/V).sub.s (X.sub.i,s -X.sub.i,b) (2) where Yi is the mole fraction of component i in the vapor, Xi is the mole fraction of component i in the liquid, L/V is the internal-reflux ratio, the subscripts d and b denote distillate and bottoms, respectively, and the subscripts r and s denote particular points in the rectifying and stripping sections, respectively. At the condition where Yi and Xi of passing streams are in equilibrium at some point within the phase-contacting device, the internal-reflux ratio is at its limiting value because an infinite area of contact is necessary for mass transfer to occur between phases at equilibrium. This condition, referred to as a "pinched" condition, with Xi, Yi, the temperature and pressure defining a "pinched point," is normally approached at one point in a binary separation, and at two points in a multicomponent separation. In practice, the extent to which the phase-contacting device is made to approach the pinched condition is generally a result of weighing the reduced energy consumption and the increased number of stages which accompany operation near the pinched condition. If constant molar enthalpies and sections of the phase-contacting device with constant internal-reflux ratios are assumed, it is always possible to express the reboiler heat duty as a function of internal-reflux ratios and often as a function of a single internal-reflux ratio if the proper control volume is chosen. For example, in a conventional column with a feed at its saturation temperature, the reboiler heat requirements per mole distillate, qreb, can be expressed in terms of the internal-reflux ratio in the rectifying section, (L/V)r ##EQU1## λ=latent heat of vaporization fv =molar fraction of vapor in the feed F/D=ratio of the molar flow rates of the feed and distillate It will be noted that when (L/V)r is at its limiting, in this case minimum, value, qreb is also minimized. In general, for a control volume including the terminal reboiler and a portion of the phase-contacting device, qreb can be calculated according to q.sub.reb =λ(Σ(V/D).sub.out,i -Σ(V/D).sub.in,j)-Q.sub.aux /D-W (4) (V/D)out,i =the ratio of molar flows of vapor stream i leaving control volume to the distillate (V/D)in,j =the ratio of molar flows of vapor stream j entering the control volume to the distillate Qaux /D=the net heat flow, other than to the reboiler, per mole distillate across the control volume boundary with heat flow entering the control volume defined to be positive. W=shaft work crossing the control volume boundary per mole distillate All V/D ratios can be expressed in terms of internal-reflux ratios, for example ##EQU2## in the rectifying section of a normal distillation system. Though a single pinch point is sufficient to limit the operation of any distillation system, it may be noted that all points in the rectifying section other than the pinched point, if any, could operate at lower internal-reflux ratios without becoming pinched, and all points in the stripping section other than the pinched point, if any, could operate at higher internal-reflux ratios than the limiting value without becoming pinched. This observation has prompted several patents and papers dealing with changing the internal-reflux ratio by moving heat within the phase-contacting device, generally using compressors to increase the temperature of potential heat-source vapors, in such a way that reboiler duty is reduced (Seader), (Haselden), (Mah et al.), (Freshwater). In the extreme embodiment of this approach, the internal-reflux ratio can be imagined to be at its limiting value at every point in the phase-contacting device, thus implying reversible mass transfer throughout the phase-contacting device. The change in the internal-reflux ratio brought about by introduction of an amount of heat Q, to an otherwise adiabatic section of a phase-contacting device, assuming constant molar enthalpies of liquid and vapor, is given by equation (5): ##EQU3## where L' and V' are the molar flows of liquid and vapor, respectively, at a point in the phase-contacting device incrementally distant from the point of heat addition in the direction of decreasing temperature, L" and V" are the molar flows of liquid and vapor, respectively, at a point in the phase-contacting device incrementally distant from the point of heat addition in the direction of increasing temperature, and Q may be positive or negative in sign. In addition to removal or addition of heat, the internalreflux ratio can be changed by removal or addition of mass. The change in the internal-reflux ratio accompanying introduction of a feed to a distillation system, the optimal point of feed introduction and the influence of the limiting internal-reflux ratio at the feed location on energy requirements are dealt with in any textbook on distillation (King) (Robinson & Gilliland). Consideration of the more general case of addition or removal of any stream from the phase-contacting device involves similar observations. The change in the internal-reflux ratio brought about by introduction of a stream of molar flow S consisting of a molar flow of vapor Vs and a molar flow of liquid Ls at a point in the phase-contacting region of a distillation system with the same temperature and pressure as the stream introduced, assuming constant molar ethalpies of liquid and vapor, is given by ##EQU4## where L', V', L", and V" are interpreted as for equation (5)--except with reference to the point of stream introduction instead of heat addition, and the signs of the Ls and Vs terms are reversed if a stream is removed. The fewest number of stages of separation will be required if a stream is added to the phase-contacting portion of a distillation system at a point where the liquid and vapor compositions are as close as possible to the liquid and vapor compositions of the added stream, with equality of the key components getting priority in multicomponent distillation. Furthermore, bringing the compositions of passing liquid and vapor streams in the region of stream introduction as close to equilibrium as practical via adjustment of the external-reflux ratio and the flows of streams entering the column, given the constraints of a reasonable number of stages in the phase-contacting device, allows the internal-reflux ratio to be brought closer to its limiting value in a substantial portion of the phase-contacting device, thus reducing the heat required per unit distillate. The inventive concepts herein, referred to as IHOSR distillation, involve combining the following two strategies: (1) changing the internal-reflux ratio within the phase-contacting region by adding and removing heat using compressed vapors from the distillation system as the heat source and liquid from the distillation system as the heat sink, where at least one of the liquid and vapor, and possibly both, are withdrawn from the phase-contacting region of the distillation system at a temperature intermediate between the extreme temperatures of the system; and (2) returning the flows withdrawn as liquid and/or vapor from the phase-contacting region at an intermediate temperature, now with altered phase, using the same criteria normally employed for feed introduction, that is, introducing the feed at a point such that the composition of the feed is as close as possible to the composition of the corresponding phase(s) in the phase-contacting region of the distillation system, and the internal-reflux ratio above the point of feed introduction is made as small as practially possible. An equivalent statement to making this reflux ratio as small as practically possible is to make the opposite phases at the point of feed introduction as close to equilibrium as practically possible. The net result of implementing IHOSR distillation is to reduce the quantity of heat which must be supplied per unit distillate output, while requiring a relatively small amount of compression work. In all cases, a second result is to bring the composition of passing vapor and liquid closer to the equilibrium compositions throughout most of the range of compositions in the column, thus minimizing internal irreversibilities due to mass transfer gradients. Examples of the variety of ways IHOSR distillation can be applied to distillation problems encountered in a conventional system are shown in FIGS. 1A, 2A, 3A and 4A. IHOSR distillation may also be implemented in systems with the rectifying section at a higher pressure than the stripping section. FIG. 5A shows a scheme for combining IHOSR distillation with extractive distillation; FIGS. 6A, 6B, 7, 8A and 8B show ways to combine IHOSR distillation with fermentative production of volatile compounds. The influence of the IHOSR approach in various embodiments on the internal-reflux ratio is presented graphically using the McCabe-Theile method (King) in FIGS. 1B, 1C, 2B, 3B, 3C, 4B, 4C, 5B and 5C. All the examples presented involve binary separations because the McCabe-Theile method is more nearly rigorous when applied to binary separations than to multricomponent separations, though it may be applied to both (Hengstebeck), (Jenny and Cicalese). The role of the internal-reflux ratio, the principles dictating the effect of introduction of a given amount of mass or heat on the internal-reflux ratio, and the strategies for "custom tailoring" the internal-reflux ratio to the equilibrium relationship are not significantly different for either multicomponent separations or unconventional distillation apparatus. In the McCabe-Theile diagrams discussed below, the phase change of removed streams is assumed to be total, though partial phase changes may also be used, and the small amount of vapor generated when compressed vapors are returned to the pressure of the phase-contacting region is ignored. The explanation that now follows concerns the distillation apparatus 101A, 101B, 101C and 101D in FIGS. 1A, 2A, 3A, and 4A, respectively, the respective processes being called scheme 1, scheme 2, scheme 3 and scheme 4. In the scheme shown in FIG. 1A, scheme 1, the returned condensate at 9 acts just like a second feed and may greatly relax the constraints on the internal-reflux ratio, and reduce qreb, which are present with just the original feed at 15. FIG. 1B shows the operating line 32A for the case where the side-stream is withdrawn at 33A at the feed location and returned above the feed location at 33B. FIG. 1C shows the operating line 32B for the case where the sidestream is withdrawn above the feed location at 34B. (It will be appreciated that the operating line 32A, is a composite of three linear segments with different slopes (i.e., three different L/V ratios), the line 32B consists of four linear segments with different slopes, but each can be viewed as a single operating line whose shape is modified according to the present teachings.) It is also possible to withdraw vapor below the feed location. Scheme 1 is especially useful for dilute feeds. The scheme shown in FIG. 2A, scheme 2, is similar to scheme 1 except that the heat exchange is at 6B between the feed location 15 in FIG. 2A and the reboiler 5B. Compared to scheme 1, scheme 2 has the advantage that the temperature difference between the heat source and heat sink will be smaller and the disadvantage that the column may be pinched below the point of heat addition. The discontinuity in the operating line labeled 32C (FIG. 2B) can be explained by considering equations (2) and (5). Addition of heat makes L'/V' less than L"/V" as shown by equation (5), but the mass and component balances which gave rise to (2) dictate that all operating lines in the stripping section must be rays passing through Xb, regardless of L/V for the case of constant molar enthalpies of liquid and vapor. As in scheme 1, the heat-souurce vapor may be withdrawn at the feed location, above the feed location, or below the feed location. The scheme shown in FIG. 3A, scheme 3, perhaps the most versatile of all the schemes, can be designed with a small difference between the temperatures of heat sources and sinks--regardless of the feed concentration--because both the heat source and the heat sink are withdrawn from the phase--contacting region of the distillation system and so have temperatures intermediate between the reboiler 5B and condenser 12 in FIG. 3A. In FIG. 3B, the operating line shown at 32D represents the situation in which the heat-source vapors at 3A and the heat-sink liquid at 4B in FIG. 3A are both withdrawn from the same location as the feed 15 and are, therefore, at virtually the same temperature prior to heat exchange; heat transfer occurs in a heat exchanger 5'. Liquid withdrawn at 4B by a pump 16 is evaporated and returned to the column 1 as a vapor at 13B. Some compressor work is necessary in this situation, even if an infinite area is assumed for heat transfer, because the dew point temperature of the heat-source vapor falls during condensation while the bubble-point temperature of the heat-sink liquid increases during evaporation. In general, the compressor pressure ratio required to drive heat transfer between a given heat source and heat sink can be minimized by operating the heat exchanger countercurrently and so matching condensation of the hotter vapor with evaportion of the hotter liquid and condensation of the colder vapor with evaporation of the colder liquid. FIG. 3C shows an operating line 32E for the case in FIG. 3A in which the heat source vapor and the heat sink liquid withdrawals are above and below the feed location, respectively. This arrangement is likely to be useful for multicomponent separations because simultaneous pinch regions above and below the feed plate, which is the normal case for multicomponent distillation, can be both circumvented with a single heat pump while realizing substantial savings in reboiler duty. Scheme 3 has a further advantage in that there are very few theoretical stages between the heat source (the vapor withdrawn at 3A in FIG. 3A) and heat sink (i.e., the liquid withdrawn at 4B in FIG. 3A); in fact the heat source can be withdrawn below the heat sink in some cases, and thus the pressure drop within the phase-contacting region between heat source and sink can be very small. In the scheme shown in FIG. 4A, scheme 4, the overhead vapor withdrawn at 3A is the heat source and it is the liquid withdrawn at 4B that is returned with altered phase at a different location 13B from its withdrawal point. Scheme 4 is useful in the case of either a "tangental pinch," in which case the liquid removal point is best above the feed location (see the operating line 32F in FIG. 4B), or a concentrated feed, in which case the liquid removal is best below the feed location (see operating line 32G in FIG. 4C). The IHOSR technique may be implemented in a single column where a vapor sidestream is removed, compressed and returned, as in schemes 1 through 4. The technique may also be implemented in a distillation system with the rectifying section at a higher pressure than the stripping section. When the rectifying section has a higher pressure than the stripping section, the temperature of at least portions of the rectifying section is higher than at least portions of the stripping section. This arrangement allows heat transfer between the stripping and rectifying sections without compression of sidestreams per se. In addition to moving heat in this manner, reintroduction of withdrawn streams at the preferred point in the column (see strategy 2 above) increases the power of the two-column/two-pressure system over and above that realized by previous persons (Haselden) (Seader). Schemes 1, 2 and 3 are all capable of efficiently concentrating a dilute feed. FIG. 5A, scheme 5, shows a process for producing ethanol more pure than the azeotrope from a relatively dilute feed, as is obtained from fermentation, which uses an IHOSR-enhanced column 1A to achieve partial purification, an extractive distillation column 1B employing salt as a separating agent (a variety of salts may be used, but potassium acetate is one of the better salts for the ethanol-water system (Cook et al.)), and a salt-recovery system. The initial purification provided by the IHOSR column 1A is very important to this process. The overall energy requirements of the system are very low (see example below) because of the efficiency of the IHOSR column 1A, the heat integration between the two columns 1A and 1B and the low-reflux ratio which may be used in the second column 1B due to the concentrated stream leaving column 1A at 14; salt recovery is facilitated because the feed to the second column at 15A left the IHOSR column 1A as vapor at 14 and so contains no particulates which would accumulate in the salt-recovery system, and the initial purification in the IHOSR column 1A means that the salt is diluted by the feed to the second column 1B to a relatively small degree. The operating lines shown at 32H and 32I in FIGS. 5B and 5C represent vapor and liquid composition in the columns 1A and 1B, respectively, in FIG. 5A. The equilibrium curve in FIG. 5C is for the case of 12.5 mole % potassium acetate as reported by Cook and Furter and has no azeotropic point. The distillation apparatus 101E in FIG. 5A, in addition to the elements expressly addressed previously herein, includes reboilers for the two columns 5C and 5E. In the apparatus 101E, it is important to balance the heat available from the vapors entering the reboiler 5E and the heat needed to generate the required vapor flows in column 1B. This heat balance can usually be achieved by proper selection of the vapor withdrawal point 3A. However, a second heat pump such as indicated by the dashed lines in FIG. 5A may be included. The salt-recovery system noted above includes a pump 16, an evaporator 5D and a spray dryer 17, which produces crystalized salt in steam 18 which is added to the reflux stream 19 of column 1B. Vapor from the evaporator 5D, can be introduced directly to the stream of stripping vapors 13A entering column 1A, thereby decreasing the amount of heat which must be added to the reboiler 5C from external sources. A variety of volatile compounds can be produced from a suitable substrate via fermentation. The best known examples of fermentations yielding volatile compounds are the ethanol fermentation carried out by species of yeast and the acetone/butanol/ethanol fermentation carried out by some bacteria of the genus Clostridium. A common feature of fermentative production of volatile compounds is the inhibition of the rate of fermentation by the fermentation products. In systems in which volatile products are produced continuously, continuous removal of these products can keep their concentration low and so free the resident organisms from end-product inhibition, allowing higher fermentation rates and smaller fermentors. In addition, the continuous removal of volatile products in a vapor stream may allow the metabolic heat generated in the course of the fermentation to be utilized; the amount of this heat can be significant (e.g., 50 percent or more) relative to the heat requirement of the distillation system. The benefits of removing end-product inhibition and recovering metabolic heat can be realized by integrating fermentation and IHOSR distillation. FIGS. 6A and 6B show apparatus 101F and 101G in which vapor is removed from a fermentor 21A operated at a pressure such that the fermentation broth boils at a temperature compatible with the requirements of the resident organisms. Each system is a variation on scheme 1, where the points of vapor removal at 3A and heat return (by a heat exchange coil 6B) are essentially at the same temperature. The liquid present after the vapor is condensed in the coil 6B, which essentially becomes the column reflux when it is reintroduced at 9A, has the same composition as the vapor in equilibrium with the liquid in the fermentor and is considerably enriched relative to the fermentor liquid. In FIG. 6A this liquid is stripped in a column 1C which gives rise to a distillate vapor which may be essentially in equilibrium with the liquid feed at 9A, and so richer than the fermentor broth by the equivalent of two equilibrium stages at total reflux. In FIG. 6B further purification is achieved using the IHOSR return stream introduced at 9 as the column feed. The input at 15A to the fermentors 21A in FIGS. 6A, 6B (and also to fermentors 21B in FIGS. 8A and 8B) is the liquid feed containing nutrients and substrate, except for the case of a gaseous substrate, needed in the fermentation process; the level 22 indicates liquid level in all the fermentors. FIG. 7 shows a variation on the arrangements in FIGS. 6A and 6B in that heat is pumped from the fermentor 21B in FIG. 7 to a reboiler 5F, for a column 1C to strip the effluent from the fermentor; vapor from the column 1C (which strips the liquidfermentor effluent) is introduced at 23 into the fermentor directly. In FIGS. 8A and 8B, only a liquid stream is withdrawn from the fermentor 21B; vapor is withdrawn from a column 1 adjoining the fermentor 21B, and the IHOSR technique is implemented in this column. A portion of the stripped liquid effluent leaving the distillation system (i.e., from a reboiler 5G in FIGS. 8A and 8B) is returned to the fermentor 24 so that cells and the substrate remains in the system for a length of time sufficient to allow the desired conversion to be achieved. FIG. 8B shows the same system as shown in FIG. 8A with countercurrent heat exchange at 5H between the fermentor effluent and stripped recycle stream. The scheme shown in FIG. 8B is especially attractive in that neither the fermentor nor distillation system need be operated at reduced pressure. The fermentor 21B can be operated at a pressure such that the broth temperature is far below its bubble point, and the column 1 in FIG. 8B can be operated at a temperature far in excess of that which could be tolerated by the organisms in the fermentor, providing that some means of preventing the organisms from entering the distillation system, such as centrifugation, filtration, immobilization, or a floculating strain, is employed. Table 1 below displays calculated values for the heat and work requirements, the number of stages and the external-reflux ratios for several example separations using IHOSR distillation and, where possible, is compared to conventional distillation with an adiabatic column. Separation 1 involves separating a dilute ethanol-water mixture with ethanol (Xf)=0.0039 (˜1% by weight), to a distillate with mole fraction (Xd)=0.8 (˜91% by weight), and a bottoms with mole fraction (Xb)=0.000039. Separation 2 involves separating a concentrated ethanol-water mixture with Xf =0.1 (˜22% by weight) to a distillate with Xd =0.8814 (˜95% by weight) and a bottoms with Xb =0.001. Separation 3 involves separating an ethanol-water mixture of intermediate concentration with Xf =0.02437 (˜6% by weight) to a distillate with Xd =0.9748 (˜99% by weight) and a bottoms with Xb =0.0002437. For separation 1, IHOSR scheme 1 will be employed, for separation 2, IHOSR scheme 4 will be employed, for separation 3, the two-tower system (shown in FIG. 5A) using salt as a separating agent, in this case potassium acetate at 12.5 mole %, will be used. For separation 1, using IHOSR distillation reduces the heat requirement from 59,494 BTU/gal to 7,398 BTU/gal compared to the conventional case, while using 1,917 BTU/gal of work. This reduction is made possible by alleviating the limitation of the external-reflux ratio imposed by the low-feed concentration in the conventional case. The number of stages is increased from thirty-five for the conventional case to forty-eight for the IHOSR case because passing streams are brought closer to equilibrium by the IHOSR technique. The increase in the number of stages is relatively small because most of the stages are required for stripping the dilute feed in both cases. The external-reflux ratio is lowered from 19.39 for the conventional case to 2.19 for the IHOSR case. For separation 1 and the operating parameters listed in Table 1, the molar ratio of vapor removed from the column and compressed to the flow of distillate, S/D, is 17.2. If less material were removed in relation to the distillate, then the external-reflux ratio would have to be higher and the reduction of the heat duty would be smaller than indicated in Table 1; if more material were removed, then the column would become pinched to a greater extent at the point of reintroduction of the sidestream, tending toward an infinite stage requirement. The average ΔT for heat transfer from the compressed vapor to the reboiler, was the minimum possible while still having the last drop of vapor condense at 0.5° C. hotter than the reboiler temperature. During the course of condensation the temperature of the ethanol-water mixture fell by over 10° C. The withdrawn vapor had to be compressed to 1.43 atmospheres to achieve the indicated ΔT. For separation 2, using IHOSR distillation reduces the heat requirement from 19,458 BTU/gal to 7,018 BTU/gal compared to the conventional case, while using 376 BTU/gal work. This reduction is made possible by using a heat pump from the overhead vapor to bring about a high internal-reflux ratio where it is needed, near the distillate composition, while having a lower internal-reflux ratio elsewhere. In the calculations used to generate Table 1, the external-reflux ratio for separation 2 with conventional distillation was set at 1.25 times the minimum reflux, external-reflux ratio, a conventional value. For the IHOSR case, the external-reflux ratio was allowed to be somewhat higher than 1.25 times the minimum to avoid the "tangental pinch" near the distillate composition. Because of this choice, the stage requirement for the IHOSR case, sixty, is lower than for the conventional case, sixty-five. S/D is 5.43. If S/D were made higher, then fewer stages would be required, the internal-reflux ratio near the distillate composition would increase, and the work requirement would increase; there would be no effect on the heat requirement as long as the internal-reflux ratio below the region effected by the heat pump were kept constant. If S/D were made lower, then more stages would be required, and the internal-reflux ratio near the distillate composition would decrease until the column became completely pinched. The ΔT of 5.1° C. for heat transfer from the compressed overhead vapor to the withdrawn column liquid was arbitrarily selected. Lower values than 5.1 are possible because the temperature of the heat-source vapor and heat-sink liquid remain relatively constant during reciprocal phase change for this case. The withdrawn vapor had to be compressed to 1.3 atmospheres to achieve the indicated ΔT. For separation 3, 4,561 BTU heat/gal and 592 BTU work/gal are required to enrich a 6 wt. % ethanol feed to a 99 wt. % ethanol distillate. These energy requirements may be compared with a value of roughly 27,000 BTU heat/gal for the separation of a ten weight % feed using azeotropic distillation and adiabatic distillation columns (Busche). The total number of stages required by both the high- and low-pressure columns is twenty-nine, a low value because significant pinch regions are never encountered. The value of S/D in the high-pressure column is 3.36. If S/D were greater, the column would be pinched near the composition of the leaving vapor; if S/D were smaller, the composition of the leaving vapor would not be as great but fewer stages would be required. In this example, vapor is removed at the feed plate of the high-pressure column, and the vapor flow leaving this column is just sufficient to generate the required vapor flow in the low-pressure column. For a different feed composition, a balance between the vapor leaving the first column and the vapor requirement of the second column could be achieved either by moving the point of vapor withdrawn to a location other than the feed plate, or introducing a second heat pump operating between the vapor leaving the high-pressure column and its reboiler. The majority of the heat used for salt separation, about 95 percent for the case under consideration, can be recovered by evaporating the salt and using the vapor thereby generated in the high-pressure column in lieu of vapor supplied by an external heat source. The heat necessary to dry the saturated salt solution leaving the evaporator to produce solid salt is very small because of the high solubility of the salt in the completely stripped bottoms and the low-flow rate of the bottoms relative to the distillate. As in separation 1, the indicated ΔT, 7.1° C., is the minimum value possible while maintaining the heat source at least 0.5° C. hotter than the heat sink. The withdrawn vapor had be to compressed to a pressure of 1.91 atmospheres to achieve the indicated ΔT. Integrating fermentation and distillation via removal of a volatile product from a continuous fermentor potentially offers increased fermentor productivity, due to alleviation of inhibition of the rate of fermentation by the fermentative end-products and increasing the residence time of the cells and substrate in the fermentor relative to that based on the feed rate, and decreased distillation heat requirements, due to utilization of the metabolic heat released during fermentation. The increased residence time of the cells results from the fact that a portion of the liquid entering the fermentor leaves in a stream enriched in volatile product which contains neither cells nor substrate (see FIGS. 6A, 6B, 7, 8A and 8B). The productivty of a fermentor (mass product/(fermentor volume*time)) can be expressed as the product of the cell concentration, X (mass cells/volume), and the specific product production rate, q, (mass product/(mass cell*time)). A mass balance on substrate gives the cell concentration in terms of the cell yield, Yx (mass cells made/mass substrate consumed), the entering and leaving substrate concentrations So and S, respectively, (mass/volume), and the ratio of the flow of cell-containing effluent to influent f (unitless) X=Y.sub.x *(S.sub.o -f*S)/f (7) The expression for q will depend on the specific fermentation considered. For the ethanol fermentation via yeast at constant substrate concentration, q is related to the ethanol concentration, P, in a roughly linear fashion (different studies are not in complete agreement, see Ghose and Tyagi) q˜(1-(P/P')) (8) where P' is an empirical constant. As an example of the increase in fermentor productivity by continuous product removal, consider two hypothetical continuous fermentors producing ethanol from a 13 wt. % glucose solution via yeast fermentation with both fermentors achieving 99 percent substrate utilization. Fermentor 1 is maintained at 1 wt. % ethanol via removal of a vapor stream of 11 wt. % ethanol, with 52 percent of the mass entering the fermentor leaving in this enriched stream, or f=0.48. Fermentor 2 is a continuous fermentor without continuous product removal, f=1, save in the liquid-fermentor effluent at the same concentration as the liquid in the fermentor. At 99 percent substrate utilization the ethanol concentration in fermentor 2 is 59.2 g/l. Applying equations (7) and (8) to fermentors 1 and 2 with Yx assumed constant at 0.1, indicates that the productivity in fermentor 1 would be 2.1 times higher than the productivity in fermentor 2 due to the increased cell concentration, and an additional factor of 2.2 times higher due to alleviation of ethanol inhibition. Thus the productivity is 4.6 times higher with continuous ethanol removal than without it. More dramatic increases are found for higher substrate concentration. If the small effects of the CO2 given off during the fermentation on the compression work requirement, and of operating at reduced pressure where necessary on the vapor-liquid equilibria, are neglected, the energy requirement for separating a 1 percent ethanol stream continuously removed from a fermentor are the same as in Table 1 for separation 1. Comparison of the energy requirements for separating 1 percent ethanol by IHOSR and conventional distillation, and for separating 6 percent ethanol, a more typical concentration for the production of ethanol, by conventional distillation, demonstrates that operation at 1 percent becomes much more attractive when IHOSR distillation is used as opposed to conventional distillation. Generally speaking, IHOSR distillation may allow more product-sensitive organisms to be used than hitherto practical because it lowers the energy required to separate dilute solutions. For such product-sensitive organisms, continuous product removal is especially important because the productivities possible without it are very low. Microcalirometric studies have found that considerable amounts of heat are liberated during fermentation. For the case of the ethanol fermentation, approximately 30 Kcal/mol glucose are released (Fardeau et al.). This quantity corresponds to roughly 4,000 BTU/gal ethanol, depending slightly on the distillate composition. Examination of the heat requirement for the separations in Table 1 via IHOSR distillation demonstrates that the quantity of heat available from fermentation is a very significant fraction of the heat required for separation, 54 percent, 57 percent, and 88 percent for separations 1, 2, and 3, respectively. Metabolic heat recovery is most easily accomplished in situations where vapor is removed directly from the fermentor as in FIGS. 6A, 6B and 7. TABLE 1__________________________________________________________________________Energy requirements and operating parameters for illustrative separationsby IHOSR and conventional distillation. HEAT WORK NUMBER SIDESTREAM BTU/ BTU/ OF EXTERNAL- SIDE- PRESSURE GAL GAL THEO- REFLUX STREAM AFTER AVERAGE ΔT TOSEPARATION DISTIL- DISTIL- RETICAL RATIO (L/D) RATIO COMPRESSION DRIVE HEATSTRATEGY LATE LATE STAGES (L/D) min (S/D)* (atm) TRANSFER__________________________________________________________________________ (°C.) Conventional 59,494 -- 35 19.39 17.35 -- -- -- IHOSR #1 7,398 1,917 48 2.19 -- 17.20 1.43 7.0 Conventional 19,548 -- 65 6.17 4.94 -- -- -- IHOSR #4 7,018 376 60 6.92 -- 5.43 1.30 5.1 IHOSR #1 + 4,561 592 29 1.00 -- 3.36 1.91 7.1 multieffect + extractive__________________________________________________________________________ *The sidestream ratio is the molar ratio of the flow of vapor removed and compressed to the distillate flow. Basis of Calculations: Constant molar enthalpy of all liquid and vapor streams is assumed regardless of pressure or composition. All vaporliquid equilibrium data and activity coefficients used are from published sources and were measured at 90° C. Compression work is calculated assuming 75 percent isentropic efficiency and ideal gas behavior. The small amount of vapor generated when the condensate derived from the heatsource vapor is returned to the column pressure is neglected. The saltcontaining bottoms of the lowpressure column in separation 3 is evaporated to saturation with complete recovery of the steam produced in the reboiler of the highpressure column, the heat required to dry the saturated solution is calculated assuming a 75 percent dryer efficiency and is not assumed to be recovered. The recovery of ethanol from fermentation broths is responsible for a significant portion of the cost, and often the largest share of the energy requirements for yeast-based ethanol production. For processes where substrate concentration, fermentor design or limitations of the fermentation agent require lower ethanol concentrations than those that can be produced by yeast, the incentive to develop energy-efficient alcohol recovery technologies is particularly great. Thermophilic bacteria represent a prominent example of alcohol-producing organisms which have potential advantages in comparison to yeast, but produce ethanol at relatively low concentrations. FIG. 9 shows a flowsheet representation of an alternative process, referred to herein as Alternative IHOSR Process. Two distillation columns that comprise the IHOSR section (columns C1 and C2), can be conceptually treated as one single column; a dilute ethanol solution (stream 1) is fed at the same stage where a portion of the vapor stream is taken out (stream 3). This vapor stream is sent to a compressor (VB1), where it is converted into superheated vapor at a higher pressure; the superheated stream (number 16) is then condensed in the column reboiler R1, and returned as reflux to the top of column C2 (stream 17). This condensation provides a substantial portion of the heat required by the reboiler. The bottoms of the IHOSR column contains essentially pure water. The vapor distillate of the IHOSR section is condensed and fed as a saturated liquid into the extractive column C3, operated at a higher pressure, where extractive distillation using, for example, potassium acetate, takes place. Potassium acetate, a non-volatile component, leaves with essentially water only at the bottoms of the column (stream 7). It is later concentrated in evaporator E1, and dried in drum dryer D1, to be recirculated in solution with ethanol as reflux (stream 14). The column overhead stream, containing 99% weight ethanol, is condensed to provide the evaporator heat duty and the remaining part of the IHOSR reboiler duty. Thus, the energy consumption of the process is essentially a result of the heat duty of the extractive column reboiler R2 and the power requirements of the comprssor VB1. The configuration of the Alternative IHOSR Process described above is an alternative to the previously presented version, in which the IHOSR column was operated at high pressure, using the heat of condensation of the vapor product to drive the extractive column. A computer model to simulate the steady-state behavior of the Alternative IHOSR Process was developed, with the following objectives: 1. To facilitate the rapid estimation of equipment sizes and energy flows, for different feed concentrations, product qualities and ethanol production volumes. 2. To investigate the sensitivity of economic projections to ethanol concentration in the feed, assumed steam price and some economic parameters. The state-of-the-art flowsheet simulation package Aspen Plus ™ was chosen as simulation tool, because of its extensive simulation and costing capabilities. Aspen Plus is a product of Aspen Technology, Inc. 251 Vassar Street, Cambridge, MA 02139. The major source of uncertainty in the development of the computer model was the thermodynamic behavior of the ethanol-water-potassium acetate solutions. Although this particular mixture has been the subject of two previous pilot plant studies, a reliable thermodynamic model for the vapor-liquid equilibrium behavior of the solutions was not available. A new activity coefficient model, the NRS model, which is able to represent the behavior of the solutions with good accuracy, has been developed. See Torres, J. L., "Computer Modeling and Evaluation of an Energy-Efficient Distillation Process", D. Eng. Thesis, Thayer School of Engineering, Dartmouth College, Hanover, NH (1988). The NRS model is proposed as an extension of the UNIQUAC activity coefficient model, by the inclusion of a third contribution term, the salt contribution, to express the activity coefficient of solvent i as follows: γ.sub.i =γ.sub.i.sup.C γ.sub.i.sup.R γ.sub.i.sup.S( 1) The combinatorial and residual contribution terms to the activity coefficient, γi C and γi R, are calculated using the standard UNIQUAC binary parameters for ethanol and water. An important advantage of this approach is the availability of extensive tables of binary UNIQUAC parameters, which can be used by the NRS model without further regression. The salt contribution term, γi S, is calculated by an empirical expression based on a transformation of the three concentration variables in the solution. The molar concentration of solvent i is transformed to the salt-free mole fraction, as: ##EQU5## The salt molar concentration is transformed in terms of the maximum salt mole fraction that can be attained in the particular ethanol-water binary mixture at a pressue of one atmosphere, defining the normal relative saturation as: ##EQU6## Note that with the transformations described above, the composition of any liquid mixture of ethanol, water and a salt is uniquely determined by an (X1, ξ) pair. The range of possible values at atmospheric pressure is identical for both variables, 0.0≦X1 ≦1.0 and 0.0≦ξ≦1.0. The development of the NRS model is discussed elsewhere. See Torres, J. L., "Computer Modeling and Evaluation of an Energy-Efficient Distillation Process", D. Eng. Thesis, Thayer School of Engineering, Dartmouth College, Hanover, NH (1988). The expression for the salt contribution term by the model is: 1nγ.sub.i.sup.S =ξ[k.sub.oi +k.sub.1i ξ+k.sub.2i X.sub.i ](4) The regression of the parameters required by the NRS model can be facilitated by defining an auxiliary variable: Γ=1nγ.sub.i.sup.S /ξ (5) This auxiliary variable Γ is a linear function of Xi and ξ, and thus allows the use of simple least squares fitting for regression of koi, k1i and k2i. An important advantage of the NRS model is its ability to represent equilibrium curves at constant salt mole fraction, using parameters obtained from experimental data measured at the boundaries of the feasible region. These boundaries are: ξ=0 (salt-free binary), X1 or X2 =0 (signal solvent-salt solutions) and ξ=1.0 (boiling liquid mixtures in equilibrium with a solid salt phase). The experimental measurements taken at these boundaries require relatively simple laboratory procedures. The evaluation of the computer model for the vapor-liquid equilibrium behavior of the ethanol-water-salt mixtures required the regression of parameters from the experiment data reported by Schmitt, using a multi-variable linear regression program, and the calculation of a complete equilibrium curve, using a conventional bubble-point program. See Schmitt, D., "The Effect of Salt on the Vapor-Liquid Equilibrium of Binary Solutions and on the Distillation of Azeotropic Mixtures", Dr. Ing. Thesis, University of Karlsruhe, West Germany (1979). In order to incorporate the NRS model into an Aspen simulation of the Alternative IHOSR Process, a constant potassium acetate concentration in the liquid phase was assumed for each section of the column. This assumption, confirmed in the actual distillation runs reported by Schmitt, allows the use of pseudo-binary distillation in the extractive column, where the presence of potassium acetate was not explicitly included. See Schmitt, D., "The Effect of Salt on the Vapor-Liquid Equilibrium of Binary Solutions and on the Distillation of Azeotropic Mixtures", Dr. Ing. Thesis, University of Karlsruhe, West Germany (1979); and Schmitt, D. and A. Vogelpohl, "Distillation of Ethanol-Water Solutions in the Presence of Potassium Acetate", Separation Science and Technology 18(6):547 Marcel Dekker (1983). The vapor-liquid equilibrium behavior of this pseudo-binary for the entire range of salt-free compositions was calculated using the NRS model, in a separate bubble-point program. The equilibrium distribution variables ("K-values") for ethanol and water obtained from the program were then incorporated into Aspen in tabular form with temperature; the simulation program uses non-linear interpolation to calculate intermediate values. The enthalpies for the liquid streams were corrected to account for the presence of the salt. The Alternative IHOSR Process was simulated in Aspen Plus™, following the conceptual block diagram shown in FIG. 10. The various blocks presented include the corresponding Aspen unit operation models used. The pseudo-binary approach used in the case of the extractive column required two separate distillation column models. The "water-salt section" block, which includes the evaporator E1, the dryer D1 and other equipment, was calculated manually, due to the lack of suitable models in the simulation package. It is possible to evaluate the economic advantages of the Alternative IHOSR Process and any other alternative processes for the separation of ethanol from dilute mixtures in water, by the use of a "cost-plus-return" approach, which incorporates the initial capital investment and the operating costs of each alternative into a single value. In this paper, the comparison is based on the Separation Cost Index, calculated by the following procedure: a) The Aspen simulation runs report an "installed cost", or Direct Plant Investment (DPI). This corresponds to the equipment purchase costs (calculated for December 1987), plus the installation materials (calculated in Aspen with built-in multipliers) and installation labor costs (using a weighted average wage rate of $25/hour). b) The Allocated Power, General and Services Facilities investment was calculated using the following factors: For steam generation and handling, $30 per 1000 pounds of steam used per hour. For electrical power usage, $200 per kW used. For general facilities, 10% of the sum of Direct Plant Investment and Allocated Facilities. The allocated costs associated with process and cooling water usage or waste disposal were assumed to be relatively minor and also essentially constant for the various alternatives; therefore, they were not included in these calculations. c) The Total Plant Investment (TPI) was then calculated as the sum of the direct plant investment and the allocated facilities investment. In order to incorporate both the initial and the operating costs in one single sum, the capital investment was included in the annual costs for each alternative process, by the addition of a statutory return on investment before taxes expressed as a percentage of TPI, as specified in each case. b) The following operating costs were also calculated: Energy costs, reported directly in the simulation results (steam at a fixed price per 1000 lbs, and electrical power, at $0.06 KWh; the effect of various steam prices was separately studied). Direct operating labor, estimated as 0.35 manhours/ton of product. For a plant production of 25 MMgal/year, this translates to 26000 manhours, and using a wage rate of $20/manhour, results in a fixed operating cost of $520,000/year. Note that this cost will be the same for all alternatives, since it is a function only of total production rate. To this direct operating labor were added: the cost for supervisory labor (18% of the direct operating labor), the cost for labor supplies and services (6% of d.o.l.), and the cost of plant overhead including administration (50% of d.o.l.) for a total of $436,800/year, again constant for all alternatives. c) The following capital-related annual costs were also added: Maintenance and repairs, estimated to be 3% of TPI. Depreciation, estimated to be 8% of DPI, plus 6% of allocated costs. Property taxes, estimated to be 2% of TPI. Insurance, estimated to be 1% of TPI. The sum of the above contributions was defined to be the Total Separation Cost (TSC). Finally, a Separation Cost Index was defined in terms of added cost per gallon of ethanol produced, by dividing the resulting Total Separation Cost ($/year) by the plant production (gal/year). The predictive accuracy of the NRS model for the ethanol-water-potassium acetate represents a significant improvement over previously available models. In comparison to the experimental data reported by Schmitt, obtained at about the maximum constant salt mole fraction achievable at atmospheric pressure, the NRS model produced average deviations in the vapor phase composition of 0.0074 mole fraction units, and in the bubble-point temperature of 1.48 K. The vapor composition is shown in FIG. 11, for two different salt concentrations and 3-phase saturation. The parameter values used in this comparison were obtained from boundary data, which did not include the experimental points shown in the two lower curves. b) Process simulation. FIG. 12 shows the temperature and vapor composition profiles calculated by the simulation for the three distillation columns in the process, for the 1.5% weight ethanol feed. The observed discontinuity in the temperature profile of the extractive column at the feed tray is the result of a change in the salt molar concentration in the liquid, caused by the introduction of the salt-free feed stream. The Aspen-based computer simulation of the Alternative IHOSR Process also allowed the rapid calculation of capital and operating costs for several cases, which were then evaluated by the procedure outlined above. The design strategy of the Alternative IHOSR Process involved a trade-off between capital and energy costs to minimize energy usage, because the separation of ethanol from very dilute mixtures can be a highly energy-intensive operation. FIG. 13 shows the relative contributions of various costs to the Separation Cost Index of the Alternative IHOSR Process, for a feed concentration of 1.5% weight ethanol. The values shown were calculated using a statutory ROI of 20%, with steam priced at $5.00/1000 lbs. Total yearly production of ethanol was set at 25×106 gallons. An economic comparison with a more conventional extractive distillation with benzene was also carried out. An annual ethanol production of 25 million gallons was chosen for the comparison, and an Aspen computer model was developed for the concentrating section of the conventional process; the equipment and operating costs of the azeotropic distillation section were reported by Chem Systems. See Chem Systems, Inc., "Economic Feasibility of an Enzymatic Hydrolysis Based Ethanol Plant with Prehydrolysis Pretreatment", SERI Subcontract No. XX-3-03097-2, New York, 1984. FIG. 14 shows the comparison for the case of a statutory ROI rate of 20% and an assumed steam cost of $5.00/1000 lbs of steam, for ethanol concentrations in the feed in the 1.5%-20% range. It can be observed that the Alternative IHOSR Process has a lower SCI for the entire concentration range shown; it also enjoys a significant economic advantage at the dilute end. FIG. 15 shows the energy requirements corresponding to the points plotted in FIG. 14. It may be seen that the total cost curve follows the general shape of the curve of energy requirements, which indicates that these costs (and not initial capital investment) may be the most important factor to be considered in economic comparisons between alterantive ethanol separation processes. This consideration is inhereent in the design strategy of the Alternative IHOSR Process. A substantial reduction in the energy requirements of the Alternative IHOSR Process is accomplished by a higher capital investment. Using reasonable values for the statutory ROI and mediumpressure steam price, the Alternative IHOSR Process offers substantial economic advantages. Conversely, it is possible to calculate the values of comprison parameters wthat would have to be used, in order to make the Alternative IHOSR Process and a conventional benzene azeotropic distillation equally attractive. FIG. 16 shows two of such comparisons: (a) the required statutory rate of return on investment, using a steam price of $5.00/1000 lbs, and (b) the required steam price, using a statutory ROI of 20%. The numerical value of these variables that match the SCI of both processes depend on the ethanol concentration in the feed; however, for the dilute concentrations typical of fermentation broths, very unrealistic values for ROI and/or steam prices are required for the conventional process to be equally attractive. In summary, the economic comparisons undertaken lead us to believe that the Alternative IHOSR Process can potentially contribute to the economic feasibility of ethanol production from cellulosic substrates. Work is under way at the Thayer School, to extend these economic comparisons to include other alternative processes for the separation of ethanol and water. Further modification of the invention herein disclosed will occur to persons skilled in the art and all such modifications are deemed to be within the scope of the invention as defined by the appended claims. Claims (10) Priority Applications (2) Applications Claiming Priority (1) Related Parent Applications (1) Publications (1) Family ID=27044912 Family Applications (1) Country Status (1) Cited By (162) Citations (25) - 1990 - 1990-02-06 US US07/475,732 patent/US5124004A/en not_active Expired - Fee Related
https://patents.google.com/patent/US5124004A/en
CC-MAIN-2019-39
en
refinedweb
The QMailFilterMessageSet class represents a set of messages selected by a pre-determined filter criteria. More... #include <QMailFilterMessageSet> This class is under development and is subject to change. Inherits QMailMessageSet. The QMailFilterMessageSet class represents a set of messages selected by a pre-determined filter criteria. QMailFilterMessageSet provides a representation for a named subset of messages, specified by a set of criteria encoded into a QMailMessageKey object. The properties of the QMailFilterMessageSet are mutable and can be changed after construction. Constructs a QMailFilterMessageSet within the parent container container, named name, whose message set is specified by the filter key, and with update minimization set to minimalUpdates. See also setUpdatesMinimized(). Returns the name of this message set for display purposes. Reimplemented from QMailMessageSet. See also setDisplayName(). Returns the QMailMessageKey that selects the messages represented by this message set. Reimplemented from QMailMessageSet. See also setMessageKey(). Sets the name of this message set for display purposes to name. See also displayName(). Sets the QMailMessageKey that selects the messages represented by this message set to key. See also messageKey(). Sets update minimization to set. If update minimization is set to true, the QMailFilterMessageSet will only emit the update() signal when the list of messages matching the filter key actually changes. If update minimization is false, the update() signal will also be spuriously emitted; depending on the handling of that signal, this strategy may consume significantly less resources than are required to ensure minimal updates are emitted. See also updatesMinimized(). Returns true if this message set has update minimization enabled; otherwise returns false; See also setUpdatesMinimized().
https://doc.qt.io/archives/qtextended4.4/qmailfiltermessageset.html
CC-MAIN-2019-39
en
refinedweb
About this talk Is a SpecBDD tool the same as a TDD tool, or something quite different? This talk will answer these questions, and show how PhpSpec can be integrated into your development workflow to drive quality in your Object Oriented design. Transcript - Thanks for the invitation to speak here, it's nice to go to an event in the midlands. I'm Ciaran, I'm from Starbridge. Has anyone used PhpSpec? Good, because this is an introduction talk. So those two might be bored. Is anyone doing test driven development? - Ish. - Ish. It's TDD but write the test afterwards. My country. -combination. - Yeah. So I'm gonna talk about this tool I maintain called PhpSpec. Try and talk about what it's for, and we'll touch on some subjects to do with TDD and BDD. And a bit about the TDD cycle. I found when I'm showing people this tool, it's most effective when we're sort of pair programming, so I try to make this talk feel like interactive talk. So there's some code examples, we're gonna work through something as in a TDD cycle and see how the tool supports that. So first off if you read about PhpSpec, you'll see it's referred to as a BDD tool. Which is related to TDD and it's worth talking about the differences between them. So BDD, if you ask Dan North who came up with the term what BDD is, it's a second generation outside-in-pull based multiple stakeholder, I couldn't fit it on one slide, multiple-scale, high automation, agile methodology. Which is actually a very good definition of BDD, but it's maybe a bit complicated. All those things are true. Liz Keogh came up with a better definition of behaviour driven development, she said it's when you use examples in a conversation to illustrate behaviour. So that's quite a natural thing, when you're trying to talk about what the system should do, a really easy way to help people understand what the system should is to just give an example, in this case this is what should happen. And BDD, really if you reduce it a lot is about deliberately introducing examples into conversations and deliberately saying as I'm explaining how the system behaves, I'm gonna do that by giving examples of what it will do in different situations. Cool. This is, turns out, it's kind of the same thing as TDD. So it came from TDD, that's important to talk about. So people like Kent Beck came up with test driven development, and I think, was it Kent Beck or Robert Martin came up with the rules of TDD? And it's pretty simple, there's three steps in test driven development, you start by writing a test that fails. Anyone familiar with that? Anyone doing that? Some people. That's the hard bit, you start by writing a test that later will tell you wasn't the correct code. And then after you've written the code, in some automated way you find out if it passed the test. Dan North was in a kind of coaching role, training role, trying to get people up to speed on this TDD thing, TDD is very successful. It changed my life as a developer so you should give it some time. And Dan's trying to teach people this, you start by writing a test that fails and then you write code that passes the test and then once it's passing, once it's passing you then take time to think how you're going to make it better without breaking it. Which is what we call refactoring. If you say you're doing refactoring but you don't have tests you're not doing refactoring. Myron Fowler wrote a book called Refactoring where he defined what refactoring was in Chapter Two, he's like you have to have tests, everyone ignores that. You need to know you're not breaking as you're making these changes, you need to have this safety net, it works, and then I make a little change and it still works and anywhere you really get that confidence is due to having a testing tool telling you everything's still fine. So this works, it works really effectively. Makes you a better developer, blah, blah, blah. So when Dan was teaching this, there was a sort of stumbling block which is that you say to people you have to write to test first. And like I saw a few faces when I said that, it doesn't sound right, the word doesn't sound right. It doesn't sound like a natural thing to do, write a test first. Because in life, we go to school and we take a test. Or in a factory you build something and then you test it, so it feels like test is the thing that happens later. And Dan was really kind of aware of this stuff, he's into neurolinguistic programing, things like that, so he changed the words, and this was the start of behaviour driven development. So BDD, they're doing the same cycle, you're doing the same things but you're calling them different things. And so maybe you're thinking about them differently. So the biggest shift in BDD is that you don't think of it as writing a test first, you think about it as specifying it first. So the idea of a BDD spec and a TDD test are really aligned concepts. In behaviour driven development, before you write the code, you describe using examples what the behaviour of that piece of code will be, and we'll see an example. And that feels more naturally, before I start, I'm gonna think about what it should do and I'm going to describe it. That description is gonna be some code that I type in to a testing framework, but I'm thinking of it as a description of the way the system's going to behave. And then I implement that behaviour and then I make it better so that's BDD. So effectively the thing you're doing is the same as TDD, but you're thinking about it a bit differently, you're thinking more about, instead of thinking about test, which is maybe less natural to think about afterwards, I'm thinking about it as a specification which is natural to think about as something you do first. It's natural to think of the specification as my starting point. Even if it's a quick specification for the next bit I'm doing, because we don't do waterfall projects anymore, right? Right? - Right, yeah. - Yeah. - Yeah. - So you're not specifying the entire thing and then trying to write the code, you're saying well what's the next thing I need to do? I should maybe think what's the next behaviour and write capture that somehow in a way that's going to act as a test later. So to think about the landscape of testing tools in php, why bother having PhpSpec? Because there's a testing tool that everyone uses anyway. But we can kind of plot them on different axes, so, the Y axis here is, what level are you applying your tests? Are you applying your tests one object at a time? Down at the bottom here, or are you exercising your entire system as part of the test? Because we can write a test that just instantiates the class and does stuff with it, or you can write a test that kind of deploys your system to a bunch of containers and hooks them all up together and then does web requests and provides a test database, that's the entire system. And the other axis is, are they more about testing? Or more sort of BDDish tools about describing? Based off, the way the tools API are expressed, are they about tests? Or are they trying to be descriptive and be about specs? And this isn't the official logo for PHPUnit, there is no official logo,told me of, there's no official logo for PHPUnit yet. But someone did this logo and it's really popular, so everyone thinks it's the real logo. So I'm too lazy to change my slide. I started using PHPUnit around 2005, completely changed the way I approach code. Not overnight, took me a few years to learn how to do it, but, you can use PHPUnit for testing your classes, you can also use it for exercising your whole system by driving something like Selenium, you can use it for all the in between levels of testing. So if you want to learn a testing tool that you can use at all these different layers, learn PHPUnit. And to be honest, if you're a PHP developer and you're gonna work on projects, you should probably learn PHPUnit. Because 90% of projects, at least, that do testing are using PHPUnit. Because it has to address all these different types of testing, PHPUnit's got loads of utilities for doing stuff like loading database fixtures before the test, or that's part of DBUnit but it's kind of integrated with PHPUnit. Stuff like partial knocking because as an author, or as maintainers, you don't really know what kind of testing you're gonna use it for. When it was written, it was written, wow, 15 years ago? Which is amazing for a piece of software to still be in use, and still be really good, and have new versions with new features. It's about testing, so I would say most people use PHPUnit using it to test stuff after they write the code. Which isn't a bad thing, sometimes that's what you have to do. So on this side of the axis, there's some other tools, I use behat a lot. Behat's mostly aimed, behat's not mostly aimed at exercising your entire system, although that's what most people use it for. Say it this middle ground of exercising your systems application layer and driving those tests from business use cases, this isn't a behat talk though. And behat's very much a BDD tool, it's all about having a conversation with someone about how the system should work ahead of time, and then exercising some parts of the system to check it, can fulfil those use cases that the user needs the system to be able to do. And PhpSpec's down this end, PhpSpec is just for testing classes. Individually, in isolation. And that gives us some focus. While PHPUnit doesn't know what kind of test you're gonna write and so it has to have a more generic API, we can focus a bit and say this is just an API for testing classes and we can not build in features for loading a database fixtures because it's just for testing classes in isolation quickly and then throwing them away. And because we're from this sort of BDD tradition, we're trying to make it so you can read this specification, and as a human who understands PHP, you can read the specification and kind of understand what is it this class is supposed to do. So we designed the API to try to make it readable and less about validation, it's more about this is what the object should do, less about I'm gonna test this, this is the case. That might all be a bit abstract so we'll get more specific. So PhpSpec was started by Travis Swicegood and Padraic Brady in 2007, and it was inspired very heavily by a tool called Rspec, a lot of BDD practitioners started off in Ruby. The early cool BDD tools got developed in the Ruby community around the same kind of time that Rails was getting trendy and Ruby was the future if you remember. And this book's really good, all code examples are in Ruby but it's a really good sort of BDD testing book. If you want to check it out. So Padraic and Travis started Rspec, it never got to a 1.0 release, it was very much sort of patterned after, sorry they started PhpSpec, it's very much patterned after Rspec and some of the stuff felt a bit Rubyish. I'm not going to say it was a port of Rspec but it was very close to how Rspec behaves. Now my friend and colleague Marcello became lead maintainer. Because if you know Travis and Padraic, they're involved in loads of open source projects, made arguably too many, and they didn't have time to carry this thing on so Marcello got involved, released PhpSpec 1.0 and this is when I sort of started to become aware of the project. Then at some point, Marcello and Konstantin, Konstantin who wrote Behat, they decided to write version 2 as a complete ground up rewrite and they addressed some of the problems with version 1. So there wasn't any backward compatibility between 1 and 2, it was a new project effectively. Marcello and Konstantin got it as far as beta versions for version 2 and then it stayed in that state for about a year, at which point I was using it, people in Enveeker where I work, we were using it on projects successfully, tagging it against devmaster. I started to get pissed off that there wasn't a release. So that's how I got into open source. Just driven by there not being a release recently, I started contributing. Closing off bugs, helping make some new features, and we sort of got it over the PhpSpec 2.0 release. And because I was doing so much work, I ended up taking over the project. So what was the point of PhpSpec? It was optimise the tests to work as descriptions of the behaviour. So optimise the tests for readability, optimise the API to be concise and make it clear what you're testing. To encourage good design, and this is probably the toughest thing, we omit any features that help you test what we think is a bad design. So it's hard to test, if you make it hard to test bad code with your tool, it means if people are doing tests first, they can't write bad code. However this means it's not the ideal thing to apply to your legacy classes. If you're on a legacy project and you're writing new code you can use PhpSpec, but like bolting it on retrospectively to your old classes, that kind of thing, it's not gonna work. We want to encourage the TDD cycle, so we want to make it easier for you to write the test first than it is for you to write the test afterwards. So this means you have a bunch of convenience stuff that means if you write the test first, it's gonna be easier it's just to kind of brainwashing you into doing TDD. So that's the shiny stuff we'll show that looks handy, then sometime later you realise you're always writing the test first, we've got you. So the way we enforce that is by using it constantly for everything, and then finding the bits in my workflow and other people's workflows that feel like they're a bit clunky, and thinking how could a tool help you with that? How can a tool make that workflow smoother? And to address some of the issues with PhpSpec 1, we wanted it more in a php paradigm, less Rubyish, make it something that looks like PHP, conforms of a PHP developer's right code. Well, good PHP developers. So since I took over the project for version 2, I actually took over after version 2. It's, the early work on version 2 is just Marcello and Konstantin, working in private, they dumped it into repository, since then it's really been a community effort, I'm maintaining it, but loads of people contribute and if you start using it, please contribute as well, because I'm not that, I'm quite lazy, so it's amazing to just be able to merge people's pool requests after a bit of review instead of having to build all the features myself. So let's get into it, you instal it with Composer, we're on version 3 now. All we did really with version 2 was drop some depth coded things and bump the php version requirements. And this should be all the configuration you need to get started if you're using PSR-0. PhpSpec will read the Composer autoloader and use that to figure out where all your classes are gonna be so you shouldn't need any extra config, you can just start. Psr 4 is a bit more complicated. We're trying to figure out how to auto detect Psr 4 root locations. Someone's working on that right now. So we start with some top level component, we need to say hello to people when they come to our website, because the marketing team think that will improve conversions, if you get a nice greeting. And the thing we need to do is, we're gonna have to make some objects to achieve this. So we're gonna describe what the object should do. So it's like the TDD cycle, we're gonna start with the description. And we describe it using a specification, a specification is a php class that contains examples, methods called examples, in each example is meant to say in this case, this is what the object will do. In this scenario this is what the object will do. So obviously that maps to a test case and test methods. We're trying to say so for example in this situation this is how our object will behave. And we can start by using the command PhpSpec describe and then the name of the class. So we need to call the class something. So I just name the class, I can use namespaces, whatever. And in this project I've just got the Composer JSON I showed you, it's installing PhpSpec, and nothing else yet. I've got spec in a source folder with nothing in. So when I say describe greeter, PhpSpec generates a new specification for me. So if I look in the spec folder, come on I didn't ask for that, if I look in the spec folder, there's a greeter spec that describes what the class we're gonna write, how it should behave. So any behaviour we have by default is it's initializable and it should have a type. That's a good starting behaviour for a class, right? It exists. Okay. So the next step is I need to check if the specification matches reality, so for that I use the command PhpSpec run. What do you think will happen? You'll fail, right? Because the thing doesn't exist, yeah. Said it fails in a specific way, I'll read out what it says. One example, one broken, so it's not actually a failure, broken means something went wrong while trying to execute this. I tried to run the thing you described but some php thing broke, and what broke is the class doesn't exist. So we've tried to optimise this workflow, because how many ways are there of fixing class not existing? There's one. So any time when there's only one way of doing it, probably some computer should do it for you. So it says do you want me to create phpWorksGreeter for you? I can say yes. The class greeter has been created at this path, and then it runs the test again and this time it's green, because the only behaviour is the class exists and is instantiatable. And in the source folder, there's a greeter, which, is just a class. Ignore the final thing, that's my personal template. You can template it. So that's pretty simple, so I describe what the behaviour is, the tool checks if it's true, tests pass, green now, so my next thing is to describe how that class should behave. No one's gonna crash, that's good. Did that. So now we're gonna have to come up with some real behaviour. So you give an example, an example is in a particular situation this is what should happen. And you can talk to your pairing partner if you have one of those. What's the first thing it should do? This is a point with TDD, we're gonna break down solving a complex problem into small steps. So what's the next small step we're gonna take? When it greets someone it should return "Hello". In TDD would say oh we have to write what failing test should we write next? Which is harder to think about. In this BDD cycle, it's more sort of what's the next thing it needs to be able to do? When it greets it should return "Hello". So open my specification. Got that one example about it being initializable. So I have to describe that when I greet someone, it should say "Hello". It says hello. This in a spec refers to the object we're describing. So on a technical level, forget about that a second, on a technical level what happens is the spec proxies the calls through to the real object and then checks what comes back. When you're writing it as a description we're using the fact that there's a keyword called this in PhP and saying this object we're talking about, let's describe it's behaviour. So this greet shouldReturn 'Hello'. See how close that was to the sentence? We're trying to make this API quite expressive and easy to understand. Don't worry about my funny font that does the arrows. So what's gonna happen when I run it? - Fail. - Why would it fail? - There's no greeter. - There's no greet, right. Says it's broken, so the one example passed, one example was broken. Because the function didn't exist. Do you want me to create a method called greet for you? Yes. So now it's red, red means fail, so it didn't break. But the error is I expected hello and I got null instead. And it's because the class, yeah the tool can generate the method for you but some logic is more complicated. There's actually a way, there's a flag I can pass called fake just for this case, or you can turn it on in your own personal preferences. Where in these cases it will prompt and say do you want me to make the greet method always return hello? I'm gonna say yep. And then all the tests pass. So that's, not everyone likes that, it only does it when the method's empty, so you can turn that on if you like that cycle, and the point is, by writing the test first it's been easier for me to write this class, I haven't had to do anything yet. This is just, it's really good for new people to get them into the TDD cycle. You might feel it's gimmicky, but when you're using it all the time, I haven't made a class, I haven't typed a class whatever for ages. That's not the point of the tool, it's just a way to get people into this TDD cycle, the point of the tool is this expressive syntax we're using to describe behaviour. And trying to optimise the workflow. Yep, done that. So how do we describe values? You saw should return, I called a method and I said it should return, and matchers, if you've used PHPUnit they're a bit like insertions. So the way you use them is you call a method and then you say something about what the result of this method should be. So this should return hello, the sum of 3 plus 3 should equal 6. GetEmail should return some sort of email object. GetSlug should match this regx, getNames should return an array that contains this value. It's a bunch of built in assertions. The opposite works in each case, so you say should not return, should not equal, should not have time, that kind of thing. And extensions can add extra assertions pretty easily, extra matches. There's a couple of special ones, so if you say should be something, it looks for a method called is something on the object and checks it returns true. Matches because that's a pattern that loads with php developers seem to use is a common naming thing, so, should be seems to work, and I don't say should have, if you say should have something, it'll look for has something method and check it returns the right value. They're optional of course. And you can also do find your own matchers, so, in this case I'm getting some JSON and I'm saying it should have a JSON key called username. I can pretty easily define a callback that does that assertion for me. Can do that inline in the example, so inline in the spec. I can also write matcher objects and have it in my projects and just sort of reference them in a config file and they get picked up. So there's something you're checking a lot, it's quite easy to add a new matcher. So that's the testing objects bit, testing objects on their own bit, but the most important thing about objects really is that they talk to other objects and in case that's the, he regretted calling it object oriented programming, because people think it's about the objects. It's actually about the messages between the objects and the way the objects talk to each other, that's the important bit. So we have, we want to be able to describe how one object speaks to another. Says come up with another example, when I greet someone called Bob, it should say Hello Bob. Makes sense, right? So we started with a simple case, that's important. Started with a simple case where it's just saying hello, and now I'm trying to come up with a more complicated case. You shouldn't start with the most complex case, because then, your test that triggers the most complex case you're gonna have to write code that solves the entire problem and that might take a long time. You start by solving the simple cases. You'll find when you have to solve the complex version, it's all done for you, half of it's done for you already. So when it greets Bob it should return Bob. And the interaction between our object and the person is we're gonna ask the person what their name is. So our greeter's gonna say hey, what's your name? This is called a query. There's roughly two ways objects can interact, they can either tell another object to do something, Persist this to the database. Approve this invoice. Commands when I don't really expect anything back. Or queries where they go hey give me this data. So I start with queries. My greeter's gonna ask the user what it's name is so that it can say hello. And so we use what's called a stub, a stub will use the method willReturn. So I somehow have to describe how it's gonna interact with a person. What should I call it? The example? Yeah. It_says_hello_by_name, something like that. So this greet, in this case, it's gonna say hello Ciaran. But that doesn't look right, why would it say Ciaran instead of someone else? We have to pass in the person that you're gonna say hello to so I'm gonna have, like when you greet this person, you're gonna say hello. So now I need to think, well okay, there's gonna be a person object. The way we ask PhpSpec to produce one is just by typing into. Put a nice namespace on it. So when I greet a person, it's gonna say Hello Ciaran. So why Ciaran instead of something else? I have to tell the testing tool this is someone called Ciaran. So I have to set this stub up, person, getName, I hate getters, but, you know. WillReturn Ciaran. So trying readable because a person when you ask the person for their name they're gonna say their name is Ciaran. It's just kind of the preamble to the spec. And the spec is when I greet this guy it's gonna say hello Ciaran. Make sense? Kind of readable. So what's gonna happen when I run it any bits? Break why? There's no person, right? Yeah, this is good, we're kind of gonna figure out what person looks like by talking about how someone else is going to use it. So yeah, broken, purple. Do you want me to make an interface called person? And I'm gonna say no. So why is it asking about interface instead of a class? So because of this thing, we noticed if you immediately create a class, you kind of lose the opportunity to create an interface. And the uncle Bob's interface segregation principle is no client should be forced to depend on methods it doesn't use, so I start thinking about a person, does this object, does this greeter really need to depend on person? I feel like my person in my system is gonna do other stuff as well, it's gonna have more to it than just a name. So if I depend directly on person then person might end up with more API. And actually my class doesn't depend on all these full methods, my class just wants to know about a name. So it makes more sense for me to have an interface called something like named, you can have your own naming convention, name interface. If you have to. I'd rather depend on an interface, that question makes me sort of think I'd rather depend on an interface. So I'm gonna choose it's named. Oh, come on, come on. I don't actually know how to use PhpSpec at all. So it's saying do you want me to make a named interface? And this time I'm going to say yes. You're calling getName and the named interface doesn't have that method in it, do you want to add that to that interface, yeah. And now it fails, so if we look at what it's created, I've now got this interface for named that I've defined by thinking about how the greeter's gonna greet people, I've kind of generated an interface and later I'm gonna have to make some new implement syntax. And it's failing now with the red because I expected hello Ciaran but I got hello. Because I said when you greet someone whose getName returns Ciaran you get Hello Ciaran. But actually it clearly doesn't do that. So now I'll write some code. So I've got to accept a named thing. And then just run the test. It's good to get used to running the tests all the time. Oh, doesn't like it, oh because sometimes we call it without a parameter. So I kind of have to make that optional. I can do this thing now, can't I? Oh my God. Nope, oh no you can't. That doesn't let you not specify, it lets you pass a null. So if you want to make it optional, you still have to pass. So it's still failing because it doesn't say Hello Ciaran so now I have to do, I have to make it pass. So if you're failing tests, you want to make the test pass as quickly as you can. With the minimum kind of mental effort. Doesn't mean the smallest like code golf style amazing solution, means just like whatever you think of, do it that way. So I'm gonna do return, if, if named return hello dot named, can't type, can't type in front of people. So that passes. So this is the point where you refactor and make your solution better, so it's good to get it passing. It's easy to make it better when it works. Like if your car's broken, you can't start tuning it, you have to fix it and then start tuning it, so, there's probably something I can do better here. Well what I could do, okay, what I could do is have a thing called name, and in this case, it's empty. And then I do this thing. So now what? Oh, expected hello but I got hello with a space on the end. So now I trim that. That' works, still doesn't look good. Any suggestions? Oh I could turn that into a turnery couldn't I? It is name, that or that, so I can get rid of all of this and all of this. No, I did it wrong, what did I do wrong? Named, see how the test is kind of supporting me. That moment of panic I just had, I could see how to fix it, what other option we meant to hit undo, because it was passing like 10 seconds before, and then I thought I think I can make this change and then it fails. I could have just hit undo and gone okay it's working again. I'll try that refactor again. That's why having passing tests is so important, because you always know it literally just worked, so that thing I tried didn't work, but I can go back. I get about 2 minutes and then I kind of time out and get check out dash dash. Okay so it's passing, can that get any better? Oh what? Do what with a what? Sprint F? I can inline this, can't I? I don't think php has done that correctly. So I hit Undo, it's all good, I've hit Undo and the test tells me I'm safe again so my heart rate goes down. So you can see what it did, oh yeah it's because it needs some brackets, right? Yeah, that's good enough, so now I'll go to the next test. So I'll raise above the jet brains about that. That all makes sense, right? Oh I know how I can do it without the trim. No, we can spend too long on that. So we've described how if you give it something with a name it will say hello to that name, otherwise it will just say hello. So now we can go through the whole cycle again quicker, and we're gonna now make a person, I'll do it faster. So what's the first thing I do to make a person? PhpSpec, I have to describe it, I'm gonna describe a person, I'm gonna get a specification, I shouldn't have described a person, I should have described PhpWarks here, person, so to meet that thing. I'm gonna run it, says there's no such thing as a person so I'm just gonna let the tool fix that. What is there about a person? It_is_a_named, so it should have the type named. It fails. Someone's actually working on getting PhpSpec to fix that for you but it doesn't yet because it turns out to be really complicated. So what do I need to do to pass that test? Implements named. So what's the next behaviour? You go through this cycle, so, I'll make that bigger, sorry. It_knows_its_name. It's name is gonna be Bob. Notice when I was stubbing it in the other test, I said willReturn to sort of say when you're asked this is the data you're going to return, will is describing how we're setting up our doubles, our stubs or our knocks, should is always checking something. So getName should return Bob, why is it gonna be Bob? This be, how is that class gonna know it's called Bob? I'm gonna say be constructed with Bob. So when the class is constructed with Bob and then I call getName it's gonna return Bob. I haven't written this class yet, we start here in the description, so you start by thinking about the naming of the class, thinking about what the methods are called, thinking about what parameters those methods take. Do you want to give it a constructor? Because I'm trying to use constructor, yes. Now I've added a constructor, it needs to be constructed in both places. Cool, now it's failing because it expected Bob but got null. That's because this is the class. So now I have to write the code that implements this complicated behaviour. So gonna get name. Return this name. Seems alright. All this rubbish. That passes, so now we've got a class that, the behaviour we've just described was it knows its name. It can be renamed, so let's say we've got someone called Bob and then we call renameTo, getName, shouldReturn 'Alice'. Do you want me to add renameTo, yes. It got Bob, it didn't work. So that's a pretty, this is pretty simple behaviour to implement, we just need to set the name. Passes again. Hope that's given you an idea of the workflow. I'm spending my time at start thinking about what's the next behaviour, what's the API, what's the method called, what are the parameters, what's the behaviour? To an extent, the tool generates some of that boiler plate stuff for me and then I think okay, how do I achieve that? So that's one type of collaboration, the type of collaboration where one object asks another object for something. It's called a query, you always have commands, sometimes we care that an object calls a method on another object. I care that the email gets sent, I care that the invoice is approved, but the outcome of the example is that some method gets called on another object. used a thing called mocks. It's when it greets Bob, hello Bob should be logged. How am I going to describe that? I'm not going to describe that in terms of some return value the outcome of the example is the logger gets called with a method, the log gets written to. So we'll show you how that works. We use things for mocks or spies. They're very similar to stubs, instead of willReturn and stuff like that we're using methods like shouldBeCalled. The outcome of the test is this method is called, or should have been called. So example for the greeter is it logs the greetings. Let's go do that with a logger. And when I greet, then what should happen is logger, log, I guess, shouldHaveBeenCalled. So we do a different collaborator object. I'm saying when this object greets, the logger's log method should have been called. So how did this object know about the logger? I can run this already and I'll get prompted, there's no such thing as a logger yet, do you want to make it? Logger interface doesn't have a log method yet, but you're talking about it here, do you want to create it? Yes. And then it fails because it never got called. How might my,how can the greeter know about the logger? Inject it in the constructor, sure. BeConstructedWith the logger. So now I'll say hang on, the greeter didn't have a constructor, do you want me to make a constructor? Yes. That fails because in all the other examples, we didn't give it a logger. The greeter now has this constructor. We touch, oh. It has this logger thing which isn't being provided in the other examples. So what I can do, outside of all the examples in this spec I can use the function let. Let this beConstructedWith a logger. And now I don't need to have that in each example, it's kind of used in each example. And kind of magically this instance of logger is the same as this instance of logger. Kind of magically. So now I have one failing test, we didn't log the message. And that's because there's this logger and we're not doing anything and somewhere in here we need to log it, so I can construct a variable message. I can initialise that field as a logger. And then here I guess I do this, logger log message. And you see that everything passes. If I comment that out. It contains now I want to log the message. So in between failing tests I'm writing quite small bits of code. This example doesn't have a lot of main logic, so I'm not having to do a lot of thinking in each step when it's a really complex problem you're having to do a little bit of thinking in each step. Because you're breaking down a problem across multiple steps you're doing small amounts of thinking. So what have we built? Now we ended up with these types, there's a greeter that depends on an interface called named and we've made an implementation of the named interface. We also defined a Logger interface. We're probably not going to make a concrete thing, it's probably going to be an adapter to some logging library that we're gonna decide later, monologue probably. We can also run PhpSpec and it would output all of the examples for each class. So because we'll see maybe from that, these things become a way of understanding the code, you can hopefully read this and understand pretty much what the person does. It's constructed with Bob and has a particular type. It's constructed with Bob, it should say it's name's Bob, if it's constructed with Bob and you rename it to Alice it should say it's name's Alice. That's pretty simple, we can kind of understand what the object does by reading the spec. So PhpSpec focuses on being descriptive, tries to make the really common boring annoying stuff easy and automated, it tries to get you to do this pattern where you're designing first through the specs and then you're writing code. Current, I think the last release was 3.2. What is new matchers for warnings, because some people simply decided to start omitting those deprecation warnings and people want to test that. Previously ignored warnings is the thing people want to test because everyone uses exceptions now. But we added some stuff for testing warnings, and some matchers for testing iterations, iterators. Should iterate as this, this, this stuff. Under course of development, we're gonna reach version 4 in June, we do a kind of annual release cycle where each summer we drop deprecated things and bump some of the memo versions so version 4 is only gonna support php 7. Version 3 is gonna live until 2018 and then probably die. But die just means we're not fixing it, we're not gonna delete the tag or anything. And some things we want to build. When I said that the person should implement named, I want PhpSpec to say do you want me to make it into an interface for you? But that refactoring when you drill into it has loads of edge cases so that's kind of under development. We want to make it easier to use PSR 4, the moment you kind of have to replicate your different namespaces and folders into the PhpSpec xaml we want to read that from Composer. And somebody's working on that. And I've got a branch where we handle fatal errors. Even in php 7 there are fatal errors that you can't catch. We've got a strategy for dealing with that that involves processes. And we want to roll that out because it will make things on php 5 easier, php 5 users will be able to catch errors. So I want to get that into version 3, so it's available for version 3 that will still support php 5. This is me, I work for Enveeka, I should say that because I adapted enough to come here. So if you want to know about Enveeka we do software development and consultancy and training, and I do a lot of training, so if you want training come and talk to me. I maintain PhpSpec and i really want more people to be involved in helping the project and do pool requests. I also co-organize BDD London meetup which is every two months in London, which is 20 pounds return on the train and only takes an hour each way. And we're doing a twice monthly meetup where we talk about BDD with people from other programming languages and stuff. The videos are available online, and I guess if you've got any questions I can answer them. Does anyone have any questions, yes? Yeah you can, so here you're describing construction. We don't actually instantiate the class until you do something like this, that prompts you to instantiate it, so you could, for instance here, it doesn't make any difference in this test but you can override how it's gonna be constructed, and then it's actually constructed when this method's called what you can't do is then here try and change how it's constructed, you'll get an error message. So we support thing like named static constructors. Yes, you can do like, it's hard to explain. Since, let's do an example, instead of beConstructedWith Bob, I can do beConstructedNamed Bob. beConstructedNamed Bob. And when I run it, it'll say do you want me to make a static method called named? Yes, boom. And it will, you'll be able to construct the person by calling this. And if I haven't already got a constructor, it will say do you want to make a private constructor as well? If you don't have an existing constructor. But not everyone uses named constructor so I didn't mention it. But yeah you can override how the object's constructed as long as it's before something that would need the object to exist. Anything else? That was quite a deep dive, any more easy questions? - [Audience Member] Are there many notable projects that have been used? - At Enveeka we use it on loads of projects. Silius uses it which is an ecommerce platform. At one point, Laravel decided to start bundling it. I don't think that was Taylor's idea. But they added it as a dev dependency and then I think it later got removed. But it's hard to tell from packages because there's loads of Laravel projects depending it and then completely not using it. So it's really hard to tell from the stats. Silius is probably the biggest open source project I can think of. Because they use PhpSpec and Behat. - [Audience Member] Do you have, like a like an example that maybe you've done recently? Like just see how we bail out of, how crazy they can get? - No open source, PhpSpec tests itself with PhpSpec. And Behat. I think Silias is a good reasonable example of some big ecommerce framework that's trying to use this stuff. Most of the time it's used for the core domain model, because it's classes and not infrastructure, but frameworks tend to not have a domain model, unless it's something like an ecommerce framework that you're gonna sort of instal, libraries tend not to have it. We've used it for creating stuff like magenta extensions. Where we needed a nice clean, not everyone makes extensions as a nice clean domain model, but, we wanted a core domain model that's completely tested and then a layer of stuff that adapts it into a magenta extension. I think some of those are open source under Enveeka organisation. Yes. Normally this is too low level to talk to businesses about. So you do try and capture the words and phrases they're using in the business to describe these things, but, start asking, like if you're making a car for someone, asking them what size the nuts and bolts should be is like there's a mismatch there, so normally when I'm working on software, we've had conversations with business experts and tried to write scenarios we can use Behat to test, and then with those failing scenarios, then you're using PhpSpec, so, iterate through until the scenario passes. But I think it's really aimed, it's not readable by nontechnical people, it's meant to be really readable to people who understand php. Yeah, yeah. Yeah, when I've done event storming and then taking it towards a test, I then turn that into Behat test. And then,event storms, the past events map well on to given, commands map well on to when, and the events it produces map well on to then. So you can write that out as and use it in Behat or you can use PHPUnit or something to test the command is the right thing. Any others? Yes. Yeah, so, autonomous vehicle sounds cool. So if you think about your application, you've got different layers, you've got your core domain model which is how it really works inside. And then you tend to have, maybe you don't have it but you should. I'm gonna do an octagon just to confuse people. Like an application layer, this is the services that your application exposes that things like controllers would use. It's good to have that separation if you don't. And then you have stuff like user interfaces and databases, like frameworks other people write or infrastructure other people write that you have to plug into. So like from your symphony app, your calling methods in the application layer. So in terms of where these tools are aimed, the PhpSpec really kind of aimed at this core domain model. When the objects inside the domain represent the concepts of the domain and how they interact, and they're just pure php objects. I'll draw that bigger because you might have lots of domain and a little bit of application. And then things like Behat they tend to address this application layer, because Behat scenarios are written from a user perspective so they're representing things a person has to be able to do with the system. Whether it's through the UI or by ringing you up or whatever these are the actions a user has to be able to accomplish through this system, so you want to have an API here that corresponds to the things people do. Like if people constantly approve invoices you want a method called improve invoice. So driving this layer that exposes a service that knows how to prove invoices with use cases from people from Behat means that gonna be nicely aligned, but in the middle you might need to have a concept of an invoice and a concept of what approval means and that's what you're driving with PhpSpec. These domain concepts. A lot of people don't separate the domain and the applications, that's okay. Just makes it harder to rejig your application because you already control using as objects. And this Api can be services you're exposing, it can be having a list of commands that it can accept, something like that. Any others? When does this time out? If you have more questions I can do this all night. Ask me in the pub? Thanks everyone, hope you enjoyed it.
https://www.pusher.com/sessions/meetup/php-warwickshire/driving-design-with-phpspec
CC-MAIN-2019-39
en
refinedweb
XSD include treated like XSD import when loading WSDL for proxy service ----------------------------------------------------------------------- Key: SYNAPSE-486 URL: Project: Synapse Issue Type: Bug Components: Proxy Services Affects Versions: 1.2 Environment: Windows Reporter: Joseph Caristi Priority: Minor I am attempting to proxy Axis services using Synapse. The <types> section of my WSDL has the following instruction (which works perfectly in Axis): <xsd:include When I point Synapse at the WSDL and XSD, I get the following exception: org.apache.ws.commons.schema.XmlSchemaException: Schema name conflict in collection. Namespace: This is the only namespace for my service. Both the WSDL and XSD have the same namespace. This is why I am using an xsd:include rather than an xsd:import. This all worked before I extracted my types into a separate XSD file. Complete details available in Synapse user forum: -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online. --------------------------------------------------------------------- To unsubscribe, e-mail: dev-unsubscribe@synapse.apache.org For additional commands, e-mail: dev-help@synapse.apache.org
http://mail-archives.apache.org/mod_mbox/synapse-dev/200812.mbox/%3C1995393887.1228225244322.JavaMail.jira@brutus%3E
CC-MAIN-2019-39
en
refinedweb
1,165aijuten started a new conversation Lazy Loading Appears Not To Be Working For One Subresource Hey guys, I'm pulling my hair out over one particular issue. I'm sure the problem / solution is incredibly simple, but I don't seem to be seeing it. I'm using Fractal () for a Laravel-based API. I'm also using Lazy Loading on "include"d models. Very simply, I have a relationship, which appears to be having an N+1 problem. The problem appears to be my Slot->Booking, relationship, as when removing this nested resource from my included payload, I get significantly (~90%) fewer SQL queries. Slot Model <?php namespace App; use Illuminate\Database\Eloquent\Model; use Illuminate\Database\Eloquent\SoftDeletes; class Slot extends Model { use SoftDeletes; protected $fillable = ['capacity', 'starts_at', 'ends_at', 'created_at', 'updated_at', 'group_id', 'session_id']; protected $dates = ['deleted_at', 'starts_at', 'ends_at']; public function group() { return $this->belongsTo('App\Group'); } public function bookings() { return $this->belongsToMany('App\Booking'); } public function bookingSlots() { return $this->hasMany('App\BookingSlot'); } } One thing worth noting, is that the slot->booking relationship is through BookingSlot, which itself has a model (there is supplementary information stored within this model). Interestingly, if I do slot->bookingSlots->booking I do not appear to get the N+1 issue. Can anyone point to a reason for this happening, and if so, a workaround? Please let me know if there would be any advantage to me posting further code. taijuten left a reply on Issue With "Activate New Release" Deployment Hook taijuten started a new conversation Issue With "Activate New Release" Deployment Hook I'm having issues with the Activation of New Release. The fail is logging the following output: Linux Detected... Release Activated (20161211152555)! PHP-FPM Detected: Reloading [sudo] password for forge: Sorry, try again. [sudo] password for forge: sudo: 1 incorrect password attempt Either my Google-Fu is failing me today, or I'm not able to find any others with the same issue. It looks like sudo password is being required, and the commands following the activation are being inserted as the password. Any ideas on how to get around this? taijuten left a reply on Clearing Cached Relationship After Attach Or Sync taijuten left a reply on Clearing Cached Relationship After Attach Or Sync @jeffdavis the intention of the for loop is to remove any which are already attached to the group, but not part of the submitted lessons array. Then after that, we attach all the of the ones in the array. Essentially, this is replicating the sync functionality, but in a way that an Event can be fired when removing a lesson from the group. taijuten started a new conversation Clearing Cached Relationship After Attach Or Sync I have the following snippet from a controller, where I'd attaching one resource to another. However, when I perform a die and dump of the resource, or return this to the client, it seems to have the data from the previous request, not containing the changes affected by the attach. if($request->has('lessons') && Lesson::where('client_id', $group->event->client->id)->whereIn('id', $request->lessons)->count() == count($request->lessons)) { // check to see if we should remove any lessons from the group foreach($group->lessons as $lesson) { if(!in_array($lesson->id, $request->lessons)) { $group->lessons()->detach($lesson->id); Eventable::fire(new GroupLessonRemoved($group, $lesson)); } } $group->lessons()->attach($request->lessons); dd($group->lessons); } For example, where I submit my request with a new lesson to attach, the output will show the lessons without that new one attached. If I refresh, it shows attached. What am I missing here? Any help would be greatly appreciated taijuten left a reply on Making Dynamic Eloquent Scopes shameless bump :) taijuten started a new conversation Making Dynamic Eloquent Scopes Evening all, I have a certain problem that I need to solve, and I'm unsure of the best way to approach it. Here's an example: I have a model: School. A School has many Students, and each student has many Guardians The many-to-many relationship between Students and Guardians has some other properties, such as is_legal_guardian (boolean) and order (integer). All of this is fine, however, I need each School to be able to set up what I can only describe as a filter, or a preset, so, that when retrieving the Guardians for each Student, the results are filtered by whatever the school has defined for a particular filter. An example of this might be a school sets up a filter called "Legal Guardians who are first point of contact", where they only wish to return Guardians of each Student where the legal_guardian is true and the order equals 1. The only way I can think of doing this is having a GuardianFilter model, and having Guardians linked to that, but unsure of how that relationship would work, when the relationship itself is defined by the School. I apologise if this is difficult to follow, but am happy to provide further clarification if required. taijuten left a reply on "Class '******' Not Found " Error In Laravel 5 Can you show your POrder php file? taijuten left a reply on Laravel From Scratch 6th Video Fetching Data Have you set up a mysql or similar database server? If so, have you created a database, and a user? Are those credentials in your .env file? taijuten left a reply on What Is The Location Context Of The File Class? I believe File uses the storage folder by default, if you're using local storage. Please see taijuten left a reply on "Class '******' Not Found " Error In Laravel 5 try running php artisan clear-compiled taijuten left a reply on Local Query Scope Failing taijuten left a reply on Guzzle Error With Jeffreys Example When you use get(), it's a response object, as the error hints. So if we have a look at the following: You can see that there's no need to send(), we now need to process the response with these methods. For example, try replacing send() with getContents() taijuten started a new conversation Local Query Scope Failing I'm doing some fairly in-depth query scopes on several of my models. For example, I have the following scope: public function scopeVisibleToUser($query) { return $query->where(function($subQuery){ $subQuery->isOwner() ->orOwnsGroup() ->orStaffPermission(); }); } This refers to other query scopes within the same model: public function scopeIsOwner($query) { return $query->where('user_id', \Auth::user()->id); } public function scopeOrOwnsGroup($query) { return $query->orWhereHas('groups', function($subquery) { $subquery->where('user_id', \Auth::user()->id); }); } public function scopeOrStaffPermission($query) { return $query->orWhere(function($subQuery) { $subQuery->whereHas('client.clientUser', function($subSubQuery) { // events client has authed user. $subSubQuery->where('user_id', \Auth::user()->id) ->whereHas('role', function($role) { $role->where('staff', 1); }); })->whereHas('sessions', function($subSubQuery) { // where session is open to staff $subSubQuery->where('open_staff', 1); }); }); } However, the "or" part of these query scopes aren't sticking. If my user doesn't match the "orStaffPermission" scope, but matches the others, they get no results. If I copy the contents of this scope onto the parent scope, all works as expected. Any thoughts on how I can solve this issue? taijuten left a reply on How To Install Laravel Homestead For Windows ? Homestead still works on windows, with Virtualbox / vagrant. If you come across a specific issue, then we can help. taijuten left a reply on Create User Form Edit It sounds like there are a fair few gaps in your knowledge of how to use Laravel. I'd recommend checking out some of the videos on this site: in particular. To get you going in the right direction: You'll be modifying your view (the form). However, to truly understand what's happening, and how to do other bits, I recommend watching that series. taijuten left a reply on Guzzle Error With Jeffreys Example The URI you're passing into client should be within an array, with a base_uri key taijuten left a reply on Between One To Many Or Polymorphic Relationship It really depends on your use-case. As a rule for polymorphic, if your resource (posts, in this case) shares the same structure but can be attached to many other resources, then use polymorphic. If the structure of the post is going to differ between group posts and user posts, then keep them separate. taijuten left a reply on A Simple Route Is Redirected To /public Folder It sounds like you haven't got your site mounted to the /public directory. How are you hosting? On IIS / Apache / Nginx? taijuten left a reply on Eloquent's BelongsTo, HasOne Etc has means that the other model contains the reference key. belongs means that this model has the reference key. The only exception to this is the belongsToMany which means there is a link table. Both models would have a belongsToMany in most cases. taijuten left a reply on SQLSTATE[HY000]: General Error: 1215 Cannot Add Foreign Key Constraint Do you have any existing records where the value for the rows you're adding the foreign key to, are blank? Alternatively, using mysql command line, run show engine innodb status; Then look for the LATEST FOREIGN KEY ERROR taijuten left a reply on Please Fix Bootstrap Navbar For Tablet Laravel uses the bootstrap navbar in the standard template it is boxed with. Check the documents here: taijuten left a reply on [Symfony\Component\Debug\Exception\FatalThrowableError] Parse Error: Syntax Error, Unexpected '$table' (T_VARIABLE) It sounds like you have a missing semicolon in your migration file. taijuten left a reply on Strange Disappearing Variable In ExceptionHandler You're correct... thank you @premsaurav No idea how I missed that! taijuten started a new conversation Strange Disappearing Variable In ExceptionHandler I have some strange behaviour, when deliberately triggering an exception, that I hope you guys can help me debug. Triggering a methodNotAllowedHttpException gives me the following error: ErrorException in Handler.php line 116: Undefined variable: response The code in question is as follows, within my Handler.php The line number corresponds to the last line within the method, the return. /** * Render an exception into an HTTP response. * * @param \Illuminate\Http\Request $request * @param \Exception $e * @return \Illuminate\Http\Response */ public function render($request, Exception $e) { if ($this->isHttpException($e)) { if ($e instanceof ModelNotFoundException || $e instanceof NotFoundHttpException) { $message = ($e->getMessage() == '') ? 'One or more resource was not found' : $e->getMessage(); $response = $this->errorNotFound($message); } elseif ($e instanceof UnauthorizedHttpException) { $message = ($e->getMessage() == '') ? 'You don\'t have access to this resource' : $e->getMessage(); $response = $this->errorUnauthorised($message); } elseif ($e instanceof AccessDeniedHttpException) { $message = ($e->getMessage() == '') ? 'Forbidden' : $e->getMessage(); $response = $this->errorForbidden($message); } elseif ($e instanceof FatalErrorException) { $message = ($e->getMessage() == '') ? 'Internal Error' : $e->getMessage(); $response = $this->errorInternalError($message); } elseif ($e instanceof ConflictHttpException) { $message = ($e->getMessage() == '') ? 'Request unprocessable due to a conflict' : $e->getMessage(); $response = $this->errorConflict($message); } elseif ($e instanceof BadRequestHttpException) { $message = ($e->getMessage() == '') ? 'Your request was unprocessable - wrong arguments' : $e->getMessage(); $response = $this->errorWrongArgs($message); } elseif ($e instanceof NotAcceptableHttpException) { $message = ($e->getMessage() == '') ? 'Your request held invalid or incomplete data' : $e->getMessage(); $response = $this->errorValidation($message); } } elseif ($e instanceof \Tymon\JWTAuth\Exceptions\TokenExpiredException) { $message = ($e->getMessage() == '') ? 'Token Expired' : $e->getMessage(); $response = $this->errorUnauthorised($message); } elseif ($e instanceof \Tymon\JWTAuth\Exceptions\TokenInvalidException) { $message = ($e->getMessage() == '') ? 'Token Invalid' : $e->getMessage(); $response = $this->errorUnauthorised($message); } else { $response = parent::render($request, $e); } app('Asm89\Stack\CorsService')->addActualRequestHeaders($response, $request); return $response; } I can't possibly see how I could end up without a response. Any thoughts? Thanks very much in advance! taijuten left a reply on Homestead. Incorrect Mapping. taijuten left a reply on Multiple SSLs On One Forge (digitalocean) Droplet HTTP2 appears to be working. I reissued the certificate from the host. I'm starting to think it was an issue with the certificate itself, and perhaps at some point I accidentally got the certs mixed up, and applied the Site 1 certificate to Site 2. Not sure if this could cause the issue, but it might make sense. taijuten left a reply on Multiple SSLs On One Forge (digitalocean) Droplet taijuten left a reply on Multiple SSLs On One Forge (digitalocean) Droplet I've turned off http\2 now. I don't have any default_server This is the result of my cURL request * Rebuilt URL to: * Hostname was NOT found in DNS cache * Trying 46.101.24.124... * Connected to api.sb.zerojargon.com (46.101.24.124): OU=Domain Control Validated; OU=PositiveSSL; CN=events.zerojargon.com * start date: 2015-11-07 00:00:00 GMT * expire date: 2016-11-06 23:59:59 GMT * subjectAltName does not match api.sb.zerojargon.com * SSL: no alternative certificate subject name matches target host name 'api.sb.zerojargon.com' * Closing connection 0 * SSLv3, TLS alert, Client hello (1): curl: (51) SSL: no alternative certificate subject name matches target host name 'api.sb.zerojargon.com' forge@zj-dev:/etc/nginx/sites-available$ taijuten left a reply on Multiple SSLs On One Forge (digitalocean) Droplet @bashy I just checked, and my NGINX does support SNI. I've scanned through my nginx config files, but can't see anything to cause an issue. I'll copy them here: server { listen 80; server_name events.zerojargon.com; return 301; } server { listen 443 ssl http2 events.zerojargon.com; listen [::]:443 ssl http2 events.zerojargon.com; server_name events.zerojargon.com; root /home/forge/events.zerojargon.com/current/dist; # FORGE SSL (DO NOT REMOVE!) ssl_certificate /etc/nginx/ssl/events.zerojargon.com/15312/server.crt; ssl_certificate_key /etc/nginx/ssl/events.zerojargon.com/15312/events.zerojargon.com-error.log error; error_page 404 /index.php; location ~ \.php$ { fastcgi_param PHP_VALUE "newrelic.appname=events.zerojargon.com"; fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index index.php; include fastcgi_params; } location ~ /\.ht { deny all; } } server { listen 80; server_name api.sb.zerojargon.com; return 301; } server { listen 443 ssl http2 api.sb.zerojargon.com; listen [::]:443 ssl http2 api.sb.zerojargon.com; server_name api.sb.zerojargon.com; root /home/forge/api.sb.zerojargon.com/current/public; # FORGE SSL (DO NOT REMOVE!) ssl_certificate /etc/nginx/ssl/api.sb.zerojargon.com/15314/server.crt; ssl_certificate_key /etc/nginx/ssl/api.sb.zerojargon.com/15314/api.sb.zerojargon.com-error.log error; error_page 404 /index.php; location ~ \.php$ { fastcgi_param PHP_VALUE "newrelic.appname=api.sb.zerojargon.com"; fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index index.php; include fastcgi_params; } location ~ /\.ht { deny all; } } taijuten left a reply on Multiple SSLs On One Forge (digitalocean) Droplet taijuten left a reply on Pros And Cons Of Most Favoured PHP Frameworks The writing style of the author makes me really doubt any credibility. There are also no real examples, or version references. taijuten started a new conversation Multiple SSLs On One Forge (digitalocean) Droplet So I'm led to believe that you can have more than one SSL on a single droplet, however I'm having some trouble achieving this. I've added two separate SSL certificates via forge, and have set up my nginx configs appropriately. The first one worked a treat, and I have no problems with it. However, when I try to view https on the other domain, I get a privacy warning. If I accept this warning, it loads the content of the domain with the first SSL certificate. Any ideas, or any other info I can provide to try and diagnose this? taijuten left a reply on Near Me? I'm between @mstnorris and @bashy - Hastings, East Sussex :) taijuten left a reply on Cloning Auth Layer @meeshka Sorry, been away for the evening. More like this (this is from a project of mine, so adapt to your needs). // these are "unprotected" routes Route::post('users', 'UserController@store'); Route::post('token', 'UserController@authenticate'); // these routes in here will require the middleware to run Route::group(['middleware' => 'checkToken'], function() { Route::get('users/{users}', 'UserController@show'); } taijuten left a reply on Cloning Auth Layer You can secure controllers with the middleware on constructor, no problem. However, I find more often than not, that I need to control more granularly. In this case, I put all routes that require authentication in a route group. Not only does this mean your constructor code isn't repeated, but also allows you to have your routes for authenticating and registering a user outside the middleware, whilst the rest of the controller within the middleware. taijuten left a reply on Cloning Auth Layer I'd probably deal with this using sessions. Give an input box for superadmins, where they can put in the username of the person they wish to impersonate. Your Authorization policies could then check for a user in the session, if the original user is a superadmin. However, as @ohffs mentioned above, I'd recommend against doing this, particularly if the website is going to be used by the public at all. That's because certain information, most users would like to think only they can access. This is stuff such as payment history, personal information etc. Although this data is available to anyone who can access your database directly, opening it up to other "untracked" users on your actual site can be a little iffy. taijuten left a reply on I Need To Retrieve Data From An External Api Depends on where you're going to use it. Could be as simple as using parse_json(); taijuten left a reply on Laravel 5 - How Can We Access Image From Storage Alternatively, have a look at By default, this will render images from your storage, and allow you to do all sorts of things with it such as cropping, color correction etc. taijuten left a reply on Cloning Auth Layer Using Authorization, you can create policies to handle this. e.g. Your PaymentController might have: // to update payment info public function update(Payment $payment) { if(Gate::allows('update', $payment) { // your code to update the info here } } Your PaymentPolicy can then have a method for each "action" class PaymentPolicy { public function update($user, $payment) { // check user is of the right type to do this } } The same principles can be applied for all of your actions. taijuten left a reply on Cloning Auth Layer If you use Laravel's authorization: Then you can have a superUser check, to see if a user is a superUser. Programmaticaly, there's no reason that a user couldn't turn this on and off. E.g. switch to an "Admin Mode" if they are able. taijuten left a reply on Laravel Routes Please can you further explain what your actual question is. If you have 100 tables, perhaps your database structure could be improved? Also, I'm not sure how you would expect to deal with this otherwise. Using vanilla PHP you would still have to have different pages for dealing with each resource. taijuten left a reply on Cloning Auth Layer You should only ever need one type of authentication for your application. Why do you not like the thought of differentiating users by role? There is no difference in security between the two ways. taijuten left a reply on Put My Laravel Site Online Please check You need PHP 5.5.9 taijuten left a reply on Hide Important Files Like .env It's not done in your htaccess. What's your hosting environment? Ubuntu / Windows? Do you have SSH access, or a control panel style of things? taijuten left a reply on Hide Important Files Like .env As mentioned in the other post, this is why you point your site root to /public, not your Laravel root. taijuten left a reply on Authorisation: Making Policies For Actions On Other Users @xtremer360 I put the solution at top of my original post, and adjusted the code to working :) taijuten left a reply on Authorisation: Making Policies For Actions On Other Users Solved, spotted my error as soon as posted (sorry guys, my bad code) taijuten started a new conversation Authorisation: Making Policies For Actions On Other Users Edit: holy crap, caught out by a single equal instead of double. Been a while since that last happened! Using the new Authorization methods in Laravel 5.1.11, I'm having some issues when performing operations on other users. Here's the Show method on my UserController public function show(User $user, Request $request) { if(Gate::allows('show', $user)) { return $this->respondWithItem($user, new UserTransformer); } return $this->errorUnauthorised("You are not authorised to view this user's details"); } And the test policy I've created: public function show(User $user, User $focalUser) { return ($user->id == $focalUser->id); } However, this always returns true, as both $user and $focalUser become the authenticated user, instead of the user being passed through from the controller. Any ideas on how to get around this?
https://laracasts.com/@taijuten
CC-MAIN-2019-39
en
refinedweb
How to get Menu import { Menu } from 'fds/components'; Type: Component A type of list component generally used within a navigational context. Expects MenuItem, MenuItemWithDrop and/or MenuGroup components as direct children. Menu handles the hovered state of its child menu items and automatically renders separation lines between MenuGroup components.
https://documentation.fontoxml.com/api/latest/menu-26568423.html
CC-MAIN-2019-39
en
refinedweb
42722/go-lang-vs-others-python-java-rust I've been hearing a lot of mixed reviews about GO Lang these days. Some developers tell me that the syntax was a reason for them to transition to that. Some other developers say that syntax is too confusing. I came across this post on TechRepublic about GO Lang:. Now i'm just curious what other developers on Edureka Community are experiencing...Any feedback about GO Lang is appreciated. :) Hi there, I've worked on only a few projects that was implemented using GoLang, so take my opinion with a little salt. First of all, different programming languages exist for a reason. If one programming language would fit all purposes, then the world of software development wouldn't be as chaotic as it is today. Before a project actually starts its development phase, there is a lot of planning and a lot of POC involved. During these POC's a lot of the discussion is on the language that will be used to implement the concept that they are thinking off. Why do you think such a process exists? The answer is diversity. Just like how you would not build a simple static webpage using something like c++ (even though it is possible) but use javascript, similarly you will have to use golang when the need for it arises. Golang is amazing at In approach one, you are directly calling ...READ MORE I followed here and it is done. Configure ...READ MORE Try this func (t *SimpleChaincode) setDetails(stub shim.ChaincodeStubInterface, args ...READ MORE This is what I used: package main import ( ...READ MORE Python. Don’t even think about it to select ...READ MORE You can use Java Runtime.exec() to run python script, ...READ MORE The only reason for ClassNotFoundException is if ...READ MORE You could try this, myElement .sendKeys(new String[] { ...READ MORE Explanation below. Also, there is a similar ...READ MORE I also faced this issue first time, ...READ MORE OR
https://www.edureka.co/community/42722/go-lang-vs-others-python-java-rust
CC-MAIN-2019-39
en
refinedweb
CMK description and classification Customer master keys (CMKs) are the basic resources of KMS. CMKs are composed of key IDs, basic metadata (such as key state) and key materials used to encrypt and decrypt data. In normal circumstances, KMS generates key material when you perform CreateKey. You can choose to create a key from external key materials. In such a case, KMS does not generate key material for the CMK you create and you can import your own key material to the CMK. You can use DescribeKey to determine the source of the key material. When the Origin in the KeyMetadata is Aliyun_KMS, this indicates that the key material was generated by KMS and can be referred to as the Normal Key. If the Origin is EXTERNAL, this indicates that the key material was imported from an external source and can be referred to as the External Key. Note When you select an external key material source and use the key material you imported, you must note the following: - You must be sure that the random source used to generate the key material complies with requirements. - You must ensure the reliability of the key material. - KMS ensures the high availability of imported key materials, but cannot ensure that imported key material has the same reliability as the key material generated by KMS. - You can directly use the DeleteKeyMaterial operation to delete the imported key material. Or you can set an expiration time to automatically delete the imported key material after it expires (without deleting CMKs). The key material generated by KMS cannot be directly deleted. Instead, you can use the ScheduleKeyDeletion operation to delete the key material along with CMK after 7 to 30 days. - After you delete the imported key material, you can re-import the same key material to make the relevant CMK available again. Therefore, you need to independently save a copy of the key material. - Each CMK can only have one imported key material. Once you import the key material to a CMK, this CMK is bound to this key material. Even if this key material expires or is deleted, you cannot import any other key material for the CMK. If you need to rotate a CMK that uses the external key material, you must create a new CMK and then import the new key material. - CMKs are independent. When you use one CMK to encrypt data, you cannot use another CMK to decrypt the data, even if these CMKs use the same key material. - You can only import 256-bit symmetric keys as key material. How to import key material - Create an external key First, you must create an External Key. To do this, go to the console’s key creation advanced options and select an external key material source, or send a request to the CreateKey API and specify the Origin parameter value as EXTERNAL. By choosing to create an external key, you indicate that your have read and understood the Note and How to import key material sections of this document. Examples aliyuncli kms CreateKey --Origin EXTERNAL --Description "External key" - Get import key material parameters After successfully creating an external key, before you import the key material, you must obtain the import key material parameters. You can obtain the import key material parameters on the console or by sending a request to the GetParametersForImport. The import key material parameters include a public key used to encrypt the key material and an import token. Examples aliyuncli kms GetParametersForImport --KeyId 1339cb7d-54d3-47e0-b595-c7d3dba82b6f --WrappingAlgorithm RSAES_OAEP_SHA_1 --WrappingKeySpec RSA_2048 - Import key material The import key material operation can import key material for external keys that do not yet have key material. It can also re-import key material that has expired or been deleted, or reset the key material expiration time. The import token is bound to the public key used to encrypt key material. A single token can only be used to import the key material for the CMK specified at the time of generation. An import token is valid for 24 hours and can be used multiple times during this period. After the token expires, you must obtain a new import token and public encryption key. - First, use the public encryption key to encrypt the key material. The public encryption key is a 2,048-bit RSA public key. The encryption algorithm used must be consistent with that specified when obtaining the import key material parameters. Because the API returns the public encryption key in base64 encoding, you must first perform base64 decoding when using it. Currently, KMS supports the following encryption algorithms: RSAES_OAEP_SHA_1, RSAES_OAEP_SHA_256, and RSAES_PKCS1_V1_5. After encryption, you must perform base64 encoding on the encrypted key material and then use this, along with the import token as GenerateDataKey parameters to import the key material to KMS. Examples aliyuncli kms ImportKeyMaterial --KeyId 1339cb7d-54d3-47e0-b595-c7d3dba82b6f --EncryptedKeyMaterial xxx --ImportToken xxxx Delete key material - After importing key material, you can use the external key just like a normal key. External keys differ from normal keys in that their key material can expire or be manually deleted. After the key material expires or is deleted, the key will no longer function and ciphertext data encrypted using this key cannot be decrypted unless you re-import the same key material - If a key enters the PendingDeletion state after its key material expires or is deleted, the key state does not change. Otherwise, the key state changes to PendingImport. You can use the console or DeleteKeyMaterial to delete the key material. Examples aliyuncli kms DeleteKeyMaterial --KeyId xxxx Operation examples Use OPENSSL to encrypt and upload key material - Create an external key. - Generate the key material. The key material must be a 256-bit symmetric key. In this example, we use OPENSSL to generate a 32-byte random number. 1.openssl rand -out KeyMaterial.bin 32 - Get import key material parameters. - Encrypt key material. - First, you must perform base64 decoding on the public encryption key. - Then, encrypt the key material using the encryption algorithm (here, we use RSAES_OAEP_SHA_1). - Finally, perform base64 encoding on the encrypted key material and save it as a text file. openssl rand -out KeyMaterial.bin 32 openssl enc -d -base64 -A -in PublicKey_base64.txt -out PublicKey.bin openssl rsautl -encrypt -in KeyMaterial.bin -oaep -inkey PublicKey.bin -keyform DER -pubin -out EncryptedKeyMaterial.bin openssl enc -e -base64 -A -in EncryptedKeyMaterial.bin -out EncryptedKeyMaterial_base64.txt - Upload the encrypted key material and import token. Use JAVA SDK to encrypt and upload key material //Uses the latest KMS JAVA SDK //KmsClient.java import com.aliyuncs.kms.model.v20160120.*; import com.aliyuncs.profile.DefaultProfile; //KMS API encapsulation public class KmsClient { DefaultAcsClient client; public KmsClient( String region_id, String ak, String secret) { DefaultProfile profile = DefaultProfile.getProfile(region_id, ak, secret); this.client = new DefaultAcsClient(profile); } public CreateKeyResponse createKey() throws Exception { CreateKeyRequest request = new CreateKeyRequest(); request.setOrigin("EXTERNAL"); //Creates an external key return this.client.getAcsResponse(request); } //... Omitted, the remaining operations are the same as those in the API method. } //example.java import com.aliyuncs.kms.model.v20160120.*; import KmsClient import java.security.KeyFactory; import java.security.PublicKey; import java.security.spec.MGF1ParameterSpec; import javax.crypto.Cipher; import javax.crypto.spec.OAEPParameterSpec; import javax.crypto.spec.PSource.PSpecified; import java.security.spec.X509EncodedKeySpec; import java.util.Random; import javax.xml.bind.DatatypeConverter; public class CreateAndImportExample { public static void main(String[] args) { String regionId = "cn-hangzhou"; String accessKeyId = "*** Provide your AccessKeyId ***"; String accessKeySecret = "*** Provide your AccessKeySecret ***"; KmsClient kmsclient = new KmsClient(regionId,accessKeyId,accessKeySecret); //Create External Key try { CreateKeyResponse keyResponse = kmsclient.createKey(); String keyId = keyResponse.KeyMetadata.getKeyId(); //Generates a 32-bit random number byte[] keyMaterial = new byte[32]; new Random().nextBytes(keyMaterial); //Gets import key material parameters GetParametersForImportResponse paramResponse = kmsclient.getParametersForImport(keyId,"RSAES_OAEP_SHA_256"); String importToekn = paramResponse.getImportToken(); String encryptPublicKey = paramResponse.getPublicKey(); //Performs base64 decoding on the public encryption key byte[] publicKeyDer = DatatypeConverter.parseBase64Binary(encryptPublicKey); //Parses the RSA public key KeyFactory keyFact = KeyFactory.getInstance("RSA"); X509EncodedKeySpec spec = new X509EncodedKeySpec(publicKeyDer); PublicKey publicKey = keyFact.generatePublic(spec); //Encrypts key material Cipher oaepFromAlgo = Cipher.getInstance("RSA/ECB/OAEPWithSHA-1AndMGF1Padding"); String hashFunc = "SHA-256"; OAEPParameterSpec oaepParams = new OAEPParameterSpec(hashFunc, "MGF1", new MGF1ParameterSpec(hashFunc), PSpecified.DEFAULT); oaepFromAlgo.init(Cipher.ENCRYPT_MODE, publicKey, oaepParams); byte[] cipherDer = oaepFromAlgo.doFinal(keyMaterial); //You must perform base64 encoding on the encrypted key material String encryptedKeyMaterial = DatatypeConverter.printBase64Binary(cipherDer); //Imports key material Long expireTimestamp = 1546272000L; //Unix timestamp, precise to the second, 0 indicates no expiration kmsClient.importKeyMaterial(keyId,encryptedKeyMaterial, expireTimestamp); } catch(Exception e) { //... Omitted } } }
https://www.alibabacloud.com/help/doc-detail/68523.html
CC-MAIN-2019-39
en
refinedweb
1,150Thomas left a reply on Laracast Post Time Issue I remember informing @jeffreyway about this early on but it still has not changed. Guess something is wrong with the timezone offset calculations :) Now I start wondering if there is also an time issue with the time estimate in scheduled videos. Wondering when: will get online. Here it states 35 minute from now (14:25 UTC+2 - Amsterdam). MThomas left a reply on How/Where Should I Code A Dropdown List I think the best advice I can give you is start with a series like: Next move on to series like: or the more recent I’m fairly sure that these series will get you up to speed on how to use Laravel as it is designed. Good luck! MThomas left a reply on Urgent Whoops MThomas left a reply on Casts Casts can be used to convert (cast) a JSON string to a native array, or a mysql boolean (tiny int) to a php boolean (true/false). In order to prevent users from entering invalid data, you need to validate the users input. On a side note, you never should store or handle phone numbers as integers, this will get you in trouble: And remember that php does not play nice with long integers MThomas left a reply on Urgent Whoops @pdc why open a 2 year old post for these kinds of discussions... If you so strongly believe that Laravel is an insecure framework, show some evidence, make security reports or issues on GitHub... in stead of complaining... nobody here forces you to use it. On the topic of Whoops. By default Laravel will disable Whoops for production... you have to enable it manually in order to show on production. MThomas left a reply on Vue.js (Laravel + Vue) Not Working On IPhone. Please elaborate on the issue... What response do you get? Is it a Laravel issue? Have you tried turning on Debug Mode? MThomas left a reply on How To Create Recursive Route? Maybe this package might be of use: MThomas left a reply on AssertSee Videos? Great you got it to work! As a rule of thumb, if you run into an issue, share the code your are working on, in this case the test method, the controller action etc :) Good luck! MThomas left a reply on 401 Unauthenticated In Ajax Login... If you want us to help you, show us the code you have written, what routes you have defined etc. A 404 means that it can't find the requested url, so that might be a start in your debugging. MThomas left a reply on AssertSee Videos? As far as I know you can assert the response contains a certain HTML string using assertSee(). And please realize and accept(!) that you need to provide the forum with information, we don't know what you want etc, only in your last post you mentioned the use of YouTube and iframes, prior to that, it could have been an Laravel model etc... MThomas left a reply on Authentication In An SPA What about And if the default drivers don't cover your use case you can find many others here: And if you want to add your own OAuth server, take a look at MThomas left a reply on Dynamically Create Subdomain I guess, the best way to find out is to give it a try. But as you need to record the subdomain somewhere there is no reason you could not redirect there (its nothing more than a regular redirect). MThomas left a reply on How Can I Use Pagination In Laravel Vuejs? Just the two first google results for: laravel vue pagination: In other words, what did you try, where did you get stuck? MThomas left a reply on Dynamically Create Subdomain Sounds like a Multi tenant approach might help you. This will also enable you to isolate all tenant/company data in seperate databases or tables. Take a look at or for composer packages doing the heavy lifting. If you like to do it yourself take a look at these two great blog posts: MThomas left a reply on Route [admin.categories.index] Not Defined. You namespace and file path are connected :). If it solved your issue, please mark it as the answer, that helps others. MThomas left a reply on Connection Could Not Be Established With Host Smtp.mailtrap.io [Connection Refused #111] Two things, please tell us what you did, how does the code look that invoked the email. Secondly you exposed you Mailtrap SMTP/API username and password, might best to remove them from the post and renew them on Mailtrap's side. MThomas left a reply on Route [admin.categories.index] Not Defined. What is the path of your controller? It should be in app/Http/Controller/Auth/Admin. MThomas left a reply on Route [admin.categories.index] Not Defined. Within the route group you prefix all controllers with ‘Auth/Admin’ and your CaregoriesController is just in App/Http/Controllers and not in App/Http/Controllers/Auth/Admin. So if you want it to work move the file for the Admin directory and update the namespace accordingly. MThomas left a reply on Variable Not Passing To View What url are you visiting? How dis you install Laravel? Are you using Homestead or Valet? MThomas left a reply on Conditional Filtering On Related Model @andersb Not sure what you're asking. You said, that if there is a post, you would like to get all comments not just the comments of the post you're viewing, that is what that query does.. it only uses the timestamp of the post the user is viewing. If the user is not viewing a post, but you just want comments that are a month old, Isn't that not just this: MThomas left a reply on Get Data From 3 Tables With Relationship Laravel @ABDULBAZITH - As mentioned in my earlier comment. Assuming you have created the relationship on your order model. The comments shows you don't have the following relationship: // In your order model public function products() { return $this->hasMany(Product::class); } // In your product model // Or type is even better but then you need to update the eloquent query accordingly public function product_type() { return $this->belongsTo(ProductType::class); } MThomas left a reply on Conditional Filtering On Related Model Not sure if I get it right, but isn't it as simple as: $post = Post::find(123); $comments = Comment::whereDate('created_at', '>', $post->created_at->addMonth()); MThomas left a reply on Hoping For Pointers On CRUD Logging... OK, I get your point. Can you explain why you want to do that? In the case you mention, you could easily resolve the Country's name based on its ID. I'm not entirely sure why you want to log for example that piece of data. Just thinking out of the box and filling in some information. But in case you try to log: "MThomas (Netherlands) updated his profile", you could do something like this: activity() ->performedOn($user) ->causedBy(auth()->user()) ->withProperties(['country_id' => $user->country_id]) ->log("{$user->name} ({$user->country->name}) updated his profile."); If this is not the case, let me try a different idea (have not tested or tried this). Assuming there is not an endless list of fields from related models you'd like to log. Why not create an accessor attribute for the item you like to log, and add it to the luggable attributes: use Illuminate\Database\Eloquent\Model; use Spatie\Activitylog\Traits\LogsActivity; class User extends Model { use LogsActivity; protected $guarded = [*]; protected static $logAttributes = ['name', 'country']; protected static $logOnlyDirty = true; // Only log created and changed fields public function country() { return $this->belongsTo(Country::class); } public function getCountryAttribute() { $this->country->name; } } MThomas left a reply on Get Data From 3 Tables With Relationship Laravel Why not use the GroupBy functionality of Eloquent, assuming your Order model has a product relationship and your Product model a product_type relationship. Order::with(['product.product_type' => function($query){ $query->groupBy('id'); }])->get(); MThomas left a reply on Hoping For Pointers On CRUD Logging... The package will log all or a selection of fields for models (that have a certain trait and implementation) every time you create or update the model. So if you add the trait to all models you want to track, you should be fine. Yes, you might log more that you need, but you could create a artisan job that deletes unnecessary logged items or something like it (if it is really a problem). Otherwise using a standard package that fits 80% of your need vs the downside of building and maintaining your own implementation might not weigh up. MThomas left a reply on Azure AD Authentication In My Laravel Web App MThomas left a reply on Hoping For Pointers On CRUD Logging... Is the thing you are looking for? You should be able to use v1. MThomas left a reply on Azure AD Authentication In My Laravel Web App Any reason you're not using on of the Azure AD packages? For example? -- Extension of Socialite -- Based on middleware MThomas left a reply on Is There A Way To List All Relationships Of A Model? MThomas left a reply on Check If Model Has Changed Since First Save Not build in (as far as I know). But I can highly recommend this package by Spatie: it enables you to track changes for specified attributes. MThomas left a reply on Open A Register Form Only In Sunday MThomas left a reply on Open A Register Form Only In Sunday Even more elegant is: if(now()->->isSunday()) { return view('foo.bar'); } You leverage Carbon and the Laravel helper which makes it even more expressive. MThomas left a reply on Select Inputs Ignored By Validation Are you sure your request data includes the field? Try a return $request->all() before validating to check it is in your request data. If you want to include user input in your validated data array, you can add it without any rules: 'email' => 'required|email', 'data_not_validated' => '', If you want to manually add something to a create method you could do this: $data = $request->validate([ // 'email' => ['required', 'email'], // ]); $data['foo'] = 'bar'; $model = Post::create($data); MThomas left a reply on Select Inputs Ignored By Validation You could use ‘required|in:male,female’ MThomas left a reply on How To Install This IBAN Validator In LARAVEL Why not use this package: This is tailormade for Laravel. MThomas left a reply on Role-based Multi-tenant With Tenant Subscription To Selected Features Yes, the can directive can limit access to resources, and you can also scope queries down to the teams (could even add a global scope for this). With regard to the subscriptions. You could add that functionality to the team model instead of the user model. MThomas left a reply on Role-based Multi-tenant With Tenant Subscription To Selected Features You should take a look at this works great with Laravels permission system. And as you said, combine it with Laravel cashier and you will have the most flexible integration you can wish for. Just asign users roles and teams, and link permissions to roles and teams. You could use the Laravel's gate/authorization features to check the acces to a certain resource based on there presence in a team. MThomas left a reply on Thoughts On Subscribing To Laracasts? If you are still in doubt, why not register for a monthly subscription and find out yourself. I found I extremely useful, and if you take a look around at the forum and the users badges, you will see there are very experienced developers and juniors here. For everyone there will be something useful. MThomas left a reply on How Can I Hide Or Remove Id From Url In Laravel? Take a look at this package, will leverage a lot of work for you: MThomas left a reply on How To Configure Laravel Passport's '/oauth/token' Rate Limit? Isn't this what you are looking for: Route::middleware('auth:api', 'throttle:60,1')->group(function () { Route::get('/user', function () { // }); }); MThomas left a reply on CORS Issue Only On Axios This package might help you: MThomas left a reply on VUE.js | Reload Page On External Server After Update. You need to inform the client (that is where the JavaScript is rendered) that it needs to refresh. You can do this using Broadcasting in Laravel. You’ll need a service like Pusher or Laravel-Websockets to make it work. Those are needed to establish a connection from the client to your server in order to push an update without a trigger from the user. An totally different option is to reload the page (or perform an Ajax request) every x seconde or minutes to your server. MThomas left a reply on Task Scheduling Runs Every Minute Despite My Directions The idea is that the cron job is called every minute. And that you set the timeframe/interval on your jobs. If there is no job set in your code base for the moment the cronjob runs, no jobs will be executed. If there is a job set in the codebase for that moment, it will run. Or have I completely missunderstood your problem? MThomas left a reply on Differents Between Eloquent ORM And Query Builder? They are linked. Eloquent lets you use models to query your database. The query builder is what is used under the hood in Eloquent. And you can use) to query your database directely and build upon eloquent queries. MThomas left a reply on Laravel 5.8 Does Not Use Sqlite For Testing Did you install Laravel Telescope? If so, you need to be sure you have the line below in your phpunit.xml: <env name="TELESCOPE_ENABLED" value="false" /> Somehow having telescope enabled while running tests causes an .env problem. MThomas left a reply on Data Is Not Deleting In Laravel Did you use the Vue component structure and compiled it down? <template> <a href="#" @ <i class="fa fa-trash red"></i> </a> </template> <script> export default { data() { return { } }, methods: { deleteUser(id){ Swal.fire({ title: 'Are you sure?', text: "You won't be able to revert this!", type: 'warning', showCancelButton: true, confirmButtonColor: '#3085d6', cancelButtonColor: '#d33', confirmButtonText: 'Yes, delete it!' }).then((result) => { //send request to the server this.form.delete('api/user/'+id).then(()=>{ Swal.fire( 'Deleted!', 'Your file has been deleted.', 'success' ) }).catch(()=>{ swal("Failed!", "There was something worng.", "warning"); }); }) }, } mounted() { console.log('Component mounted.') } } </script> MThomas left a reply on Logout In Laravel Not Working What did you change? Did you do something with the authentication routes, did you change the logic in the auth scaffold? It must be there since your code seems very similar to the default code in app.blade.php <div class="dropdown-menu dropdown-menu-right" aria- <a class="dropdown-item" href="{{ route('logout') }}" onclick="event.preventDefault(); document.getElementById('logout-form').submit();"> {{ __('Logout') }} </a> <form id="logout-form" action="{{ route('logout') }}" method="POST" style="display: none;"> @csrf </form> </div> Or try reverting back to the code above, the one that is provided in the default layout/app.blade.php file :) MThomas left a reply on Bulk Delete Of Records Where SoftDeletes Is True MThomas left a reply on Bulk Delete Of Records Where SoftDeletes Is True You don't have to pas the delete in forceDelete, you are already in the context of the delete... You could change the inside of your loop to chain the forceDelete method: Post::where('id', $post)->forceDelete(); And assuming that $post is an single ID and $postsToDelete an array of ID's instead of the loop you could do: Post::whereIn('id',$postsToDelete)->forceDelete(); MThomas left a reply on Getting Relation's Related Fields In Laravel Eloquent Model use qty_prices() instead. You need the query builder for the with() method :) $this->qty_prices will give you a collection of all the related prices. $this->qty_prices() will return a query builder object that you can build upon with Eloquent methods like with()
https://laracasts.com/@MThomas
CC-MAIN-2019-39
en
refinedweb
vi_auth_client 0.1.3 ViAuthClient # A library for Dart developers. It is awesome. Usage # A simple usage example: import 'package:ViAuthClient/ViAuthClient: vi_auth_client: ^0.1.3 2. Install it You can install packages from the command line: with pub: $ pub get Alternatively, your editor might support pub get. Check the docs for your editor to learn more. 3. Import it Now in your Dart code, you can use: import 'package:vi_auth_client/vi_auth_client.dart'; We analyzed this package on Sep 13, 2019, and provided a score, details, and suggestions below. Analysis was completed with status completed using: - Dart: 2.5.0 - pana: 0.12.21 Platforms Detected platforms: web Primary library: package:vi_auth_client/vi_auth_client.dartwith components: html. Health suggestions Fix lib/vi_auth_client.dart. (-1 points) Analysis of lib/vi_auth_client.dart reported 2 hints: line 52 col 7: DO use curly braces for all flow control structures. line 81 col 7: DO use curly braces for all flow control structures. Maintenance issues and suggestions Support latest dependencies. (-10 points) The version constraint in pubspec.yaml does not support the latest published versions for 1 dependency ( http). Package is getting outdated. (-13.97 points) The package was last published 59 weeks ago. Maintain an example. (-10 points) Create a short demo in the example/ directory to show how to use this package. Common filename patterns include main.dart, example.dart, and vi_auth_client.
https://pub.dev/packages/vi_auth_client
CC-MAIN-2019-39
en
refinedweb
This reference guide covers how to use Spring Cloud Kubernetes. 1. Why do you need Spring Cloud Kubernetes? Spring Cloud Kubernetes provide Spring Cloud common interface implementations that consume Kubernetes native services. The main objective of the projects provided in this repository is to facilitate the integration of Spring Cloud and Spring Boot applications running inside Kubernetes. 2. Starters Starters are convenient dependency descriptors you can include in your application. Include a starter to get the dependencies and Spring Boot auto-configuration for a feature set. 3. DiscoveryClient for Kubernetes This project provides an implementation of Discovery Client for Kubernetes. This client lets you query Kubernetes endpoints (see services) by name. A service is typically exposed by the Kubernetes API server as a collection of endpoints that represent http and https addresses and that a client can access from a Spring Boot application running as a pod. This discovery feature is also used by the Spring Cloud Kubernetes Ribbon project to fetch the list of the endpoints defined for an application to be load balanced. This is something that you get for free by adding the following dependency inside your project: <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-kubernetes</artifactId> </dependency> To enable loading of the DiscoveryClient, add @EnableDiscoveryClient to the according configuration or application class, as the following example shows: @SpringBootApplication @EnableDiscoveryClient public class Application { public static void main(String[] args) { SpringApplication.run(Application.class, args); } } Then you can inject the client in your code simply by autowiring it, as the following example shows: @Autowired private DiscoveryClient discoveryClient; You can choose to enable DiscoveryClient from all namespaces by setting the following property in application.properties: spring.cloud.kubernetes.discovery.all-namespaces=true If, for any reason, you need to disable the DiscoveryClient, you can set the following property in application.properties: spring.cloud.kubernetes.discovery.enabled=false Some Spring Cloud components use the DiscoveryClient in order to obtain information about the local service instance. For this to work, you need to align the Kubernetes service name with the spring.application.name property. Spring Cloud Kubernetes can also watch the Kubernetes service catalog for changes and update the DiscoveryClient implementation accordingly. In order to enable this functionality you need to add @EnableScheduling on a configuration class in your application. 4. Kubernetes native service discovery Kubernetes itself is capable of (server side) service discovery (see: kubernetes.io/docs/concepts/services-networking/service/#discovering-services). Using native kubernetes service discovery ensures compatibility with additional tooling, such as Istio (istio.io), {service-name}.{namespace}.svc.{cluster}.local:{service-port}. Additionally, you can use Hystrix for: Circuit breaker implementation on the caller side, by annotating the spring boot application class with @EnableCircuitBreaker Fallback functionality, by annotating the respective method with @HystrixCommand(fallbackMethod= 5. Kubernetes PropertySource implementations The most common approach to configuring your Spring Boot application is to create an application.properties or application.yaml or an application-profile.properties or application-profile.yaml file that contains key-value pairs that provide customization values to your application or Spring Boot starters. You can override these properties by specifying system properties or environment variables. 5.1. Using a ConfigMap PropertySource: Apply individual configuration properties. Apply as yamlthe content of any property named application.yaml. Apply as a properties file the content of any property named SPRING_PROFILES_ACTIVE environment variable. To do so, you can launch your Spring Boot application with an environment variable that you can define it in the PodSpec at the container specification. Deployment resource file, as follows: apiVersion: apps/v1 kind: Deployment metadata: name: deployment-name labels: app: deployment-name spec: replicas: 1 selector: matchLabels: app: deployment-name template: metadata: labels: app: deployment-name spec: containers: - name: container-name image: your-image env: - name: SPRING_PROFILES_ACTIVE value: "development". 5.2. Secrets PropertySource: Reading recursively from secrets mounts Named after the application (as defined by spring.application.name) Matching some labels As the case with ConfigMap, more advanced configuration is also possible where you can use multiple Secret instances. The spring.cloud.kubernetes.secrets.sources list makes this possible. For example, you could define the following Secret instances: spring: application: name: cloud-k8s-app cloud: kubernetes: secrets: name: default-name namespace: default-namespace sources: # Spring Cloud Kubernetes looks up a Secret named s1 in namespace default-namespace - name: s1 # Spring Cloud Kubernetes looks up a Secret named default-name in whatever namespace n2 - namespace: n2 # Spring Cloud Kubernetes looks up a Secret named s3 in namespace n3 - namespace: n3 name: s3 In the preceding example, if spring.cloud.kubernetes.secrets.namespace had not been set, the Secret named s1 would be looked up in the namespace that the application runs. Notes: The spring.cloud.kubernetes.secrets.labelsproperty behaves as defined by Map-based binding. The spring.cloud.kubernetes.secrets.pathsproperty behaves as defined by Collection-based binding. Access to secrets through the API may be restricted for security reasons. The preferred way is to mount secrets to the Pod. You can find an example of an application that uses secrets (though it has not been updated to use the new spring-cloud-kubernetes project) at spring-boot-camel-config 5.3. PropertySource Reload: Periodically. 6. Ribbon Discovery in Kubernetes Spring. spring.cloud.kubernetes.ribbon.modesupports PODand SERVICEmodes. The POD mode is to achieve load balancing by obtaining the Pod IP address of Kubernetes and using Ribbon. POD mode uses the load balancing of the Ribbon Does not support Kubernetes load balancing, The traffic policy of Istiois not supported. the SERVICEmode is directly based on the service nameof the Ribbon. Get The Kubernetes service is concatenated into service-name.{namespace}.svc.{cluster.domain}:{port}such as: demo1.default.svc.cluster.local:8080. the SERVICEmode uses load balancing of the Kubernetes service to support Istio’s traffic policy. spring.cloud.kubernetes.ribbon.cluster-domainSet the custom Kubernetes cluster domain suffix. The default value is: 'cluster.local' The following examples use this module for ribbon discovery: 7. Kubernetes Ecosystem Awareness. 7.1. Kubernetes Profile Autoconfig). 7.2. Istio Awareness. 8. Pod Health Indicator Spring Boot uses HealthIndicator to expose info about the health of an application. That makes it really useful for exposing health-related information to the user and makes it a good fit for use as readiness probes. The Kubernetes health indicator (which is part of the core module) exposes the following info: Pod name, IP address, namespace, service account, node name, and its IP address A flag that indicates whether the Spring Boot application is internal or external to Kubernetes 9. Leader Election <TBD> 10. Security Configurations Inside Kubernetes 10.1. Namespace Most" 10.2. Service Account. Depending on the requirements, you’ll need get, list and watch permission on the following resources: For development purposes, you can add cluster-reader permissions to your default service account. On a production system you’ll likely want to provide more granular permissions. The following Role and RoleBinding are an example for namespaced permissions for the default account: kind: Role apiVersion: rbac.authorization.k8s.io/v1 metadata: namespace: YOUR-NAME-SPACE name: namespace-reader rules: - apiGroups: ["", "extensions", "apps"] resources: ["configmaps", "pods", "services", "endpoints", "secrets"] verbs: ["get", "list", "watch"] --- kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: namespace-reader-binding namespace: YOUR-NAME-SPACE subjects: - kind: ServiceAccount name: default apiGroup: "" roleRef: kind: Role name: namespace-reader apiGroup: "" 11. Service Registry Implementation In Kubernetes service registration is controlled by the platform, the application itself does not control registration as it may do in other platforms. For this reason using spring.cloud.service-registry.auto-registration.enabled or setting @EnableDiscoveryClient(autoRegister=false) will have no effect in Spring Cloud Kubernetes. 12. Examples Spring Examples: the ones located inside this repository. Spring Cloud Kubernetes Full Example: Minions and Boss Spring Cloud Kubernetes Full Example: SpringOne Platform Tickets Service Spring Cloud Gateway with Spring Cloud Kubernetes Discovery and Config Spring Boot Admin with Spring Cloud Kubernetes Discovery and Config 13. Other Resources This section lists other resources, such as presentations (slides) and videos about Spring Cloud Kubernetes. Please feel free to submit other resources through pull requests to this repository. 14. Configuration properties To see the list of all Sleuth related configuration properties please check the Appendix page. 15. Building 15.1. Basic Compile and Test To. 15. 15. 15.3.1. Importing into eclipse with m2eclipse We recommend the m2eclipse eclipse plugin when working with eclipse. If you don’t already have m2eclipse installed it is available from the "eclipse marketplace". 15. 16.. 16. 16.2. Code of Conduct This project adheres to the Contributor Covenant code of conduct. By participating, you are expected to uphold this code. Please report unacceptable behavior to [email protected]. 16). 16) 16 16.5. IDE setup 16.
https://cloud.spring.io/spring-cloud-kubernetes/reference/html/
CC-MAIN-2019-39
en
refinedweb
PySpark SparkContext With Examples and Parameters 1. PySpark SparkContext In our last article, we see PySpark Pros and Cons. In this PySpark tutorial, we will learn the concept of PySpark SparkContext. Moreover, we will see SparkContext parameters. Apart from its Parameters, we will also see its PySpark SparkContext examples, to understand it in depth. So, let’s start PySpark SparkContext. Let’s explore best PySpark Books 2. What is SparkContext in PySpark? In simple words, an entry point to any Spark functionality is what we call SparkContext. At the time we run any Spark application, a driver program starts, which has the main function and from this time your SparkContext gets initiated. Afterward, on worker nodes, driver program runs the operations inside the executors. In addition, to launch a JVM, SparkContext uses Py4J and then creates a JavaSparkContext. However, PySpark has SparkContext available as ‘sc’, by default, thus the creation of a new SparkContext won’t work. Do you know about PySpark RDD Operations Here is a code block which has the details of a PySpark class as well as the parameters, those'> ) If these professionals can make a switch to Big Data, so can you: 3. Parameters in PySpark SparkContext Further, we are listing all the parameters of a SparkContext in PySpark: a. Master This is the URL of the cluster it connects to. b. appName Basically, “appName” parameter refers to the name of your job. c. SparkHome Generally, sparkHome is a Spark installation directory. d. pyFiles Files like .zip or .py files are to send to the cluster and to add to the PYTHONPATH. e. Environment Worker nodes environment variables. f. BatchSize Basically, as a single Java object, the number of Python objects represented. However, to disable batching, Set 1, and to automatically choose the batch size based on object sizes set 0, Also, to use an unlimited batch size, set -1. g. Serializer So, this parameter tell about Serializer, an RDD serializer. h. Conf Moreover, to set all the Spark properties, an object of L{SparkConf} is there. i. Gateway Basically, use an existing gateway as well as JVM, else initialize a new JVM. Have a look at PySpark Broadcast and Accumulator j. JSC However, JSC is the JavaSparkContext instance. k. profiler_cls Basically, in order to do profiling, a class of custom Profiler is used. Although, make sure the pyspark.profiler.BasicProfiler is the default one. So, master and appname are mostly used, among the above parameters. However, any PySpark program’s first two lines look as shown below − from pyspark import SparkContext sc = SparkContext("local", "First App1") 4. SparkContext Example – PySpark Shell Since we have learned much about PySpark SparkContext, now let’s understand it with an example. Here we will count the number of the lines with character ‘x’ or ‘y’ in the README.md file. So, let’s assume that there are 5 lines in a file. Hence, 3 lines have the character ‘x’, then the output will be → Line with x: 3. However, for character ‘y’, same will be done . You must check how much you know about Pyspark However, make sure in the following PySpark SparkContext example we are not creating any SparkContext object it is because Spark automatically creates the SparkContext object named sc, by default, at the time PySpark shell starts. So, If you try to create another SparkContext object, following error will occur – “ValueError: That says, it is not possible to run multiple SparkContexts at once”. <<< logFile = "" <<< logData = sc.textFile(logFile).cache() <<< numXs = logData.filter(lambda s: 'x' in s).count() <<< numYs = logData.filter(lambda s: 'y' in s).count() <<< print "Lines with x: %i, lines with y: %i" % (numXs, numYs) Lines with x: 62, lines with y: 30 5. SparkContext Example – Python Program Further, using a Python program, let’s run the same example. So, create a Python file with name firstapp1.py and then enter the following code in that file. ----------------------------------------firstapp1.py--------------------------------------- from pyspark import SparkContext logFile = "" sc = SparkContext("local", "first app") logData = sc.textFile(logFile).cache() numXs = logData.filter(lambda s: 'x' in s).count() numYs = logData.filter(lambda s: 'y' in s).count() print "Lines with x: %i, lines with y: %i" % (numXs, numYs) ----------------------------------------firstapp1.py--------------------------------------- Then to run this Python file, we will execute the following command in the terminal. Hence, it will give the same output as above: Let’s discuss PySpark SparkFiles $SPARK_HOME/bin/spark-submit firstapp1.py Output: Lines with x: 62, lines with y: 30 6. Conclusion Hence, we have seen the concept of PySpark SparkContext. Moreover, we have seen all the parameters for in-depth knowledge. Also, we have seen PySpark SparkContext examples to understand it well. However, if any doubt occurs, feel free to ask in the comment tab. Though we assure we will respond. See also – PySpark Interview Questions For reference
https://data-flair.training/blogs/pyspark-sparkcontext/
CC-MAIN-2019-39
en
refinedweb
l installed skiasharp.views nuget package in my project and l am trying to add the using directive skiasharp.views.forms and l get the error type or namespace not found. if l then decide to use just "using skiasharp" l get the same error when trying to instantiate a skcanvasview object. kindly help. Answers Install the Skiasharp.Views.Forms nuget package: If you have installed the SkiaSharp.Views.Formsnuget package and using SkiaSharp.Views.Forms;, but you still get the Skiasharp.views.forms not found, you can delete all of the binand objfolders in your project, rebuild your project, @LeonLu when l install skiasharp.views.forms l get the notification that skiasharp.views added with warnings. and when l try to add the using directive "using skiasharp.views.forms" intelisense only provide the following options Desktop GTK wfp.. Is this ok and which should l choose if its ok If you want to use it in xamarin forms, we add following two nuget packages @LeonLu l have installed both but l am getting the same error as before @LeonLu any ideas on how to solve my issue.Below is the exact warning l have on skiasharp.views.forms nuget package after installation Package 'Skiasharp.Views. 16.8.0' was restored using '.NetFramework,Version=v4.6.1' instead of the project target framework '.NetStandard,Version=v2.0'. This package may not be fully compatible with your project. Please open the Manage Nuget packages, update other nuget packckages to the latest, please do the same work in .android and .IOS project.
https://forums.xamarin.com/discussion/comment/386600
CC-MAIN-2019-39
en
refinedweb
These are chat archives for Makuna/NeoPixelBus @Makuna Since i got rid of all my initialised variables, and put the include for neopixelbus first, I've not had any (only when accessing a web page) green flickering. ie. it is totally acceptable. However, if i move the include, it goes totally to shit and i get a constantly flickering green pixel. Your latest changes do not help that. I can confirm that using a level shifter allows you to dim the pixels right down to almost nothing. I build a prototype that did not level shift, and it is fine for high output, but when you start to dim to quite low, say under 25%, the strip would go totally spastic. The whole strip would flash white, bits would light up. I'm using an 74HCT245N and that totally fixes it, i can dim until it is barely even visible with no flashing. so IMHO worth having. I'm trying to make a library for the HSV colors, that includes your Rgbcolor lib and it compatible with it. I've made progress but I'm stuck on how to include functions in the cpp file that are only for the lib itself. i've scanned a few .h .cpp but not found an example. there are a few functions such as 3waymin+max and a map function that works for double that i need. i guess i can just put them in manually but it might be good to know how to do it. here is what i have HsvColor HsvColor::RgbToHsv(RgbColor colour) { byte r = colour.R; byte g = colour.G; byte b = colour.B; double rd = (double) r/255; double gd = (double) g/255; double bd = (double) b/255; double max = threeway_max(rd, gd, bd), min = threeway_min(rd, gd, bd); double h, s, v = max; double d = max - min; s = max == 0 ? 0 : d / max; if (max == min) { h = 0; // achromatic } else { if (max == rd) { h = (gd - bd) / d + (gd < bd ? 6 : 0); } else if (max == gd) { h = (bd - rd) / d + 2; } else if (max == bd) { h = (rd - gd) / d + 4; } h /= 6; } return HsvColor(h,s,v); } // helper modules... these have to go in private declarations... NOT WORKING... mmmm double threeway_max(double a, double b, double c) { return max(a, max(b, c)); } double threeway_min(double a, double b, double c) { return min(a, min(b, c)); } double map_double(double x, double in_min, double in_max, double out_min, double out_max) { return (x - in_min) * (out_max - out_min) / (in_max - in_min) + out_min; } HsvColor::HsvColor(RgbColor colour) {
https://gitter.im/Makuna/NeoPixelBus/archives/2015/06/09?at=55771633813c577e1cf5c97a
CC-MAIN-2019-39
en
refinedweb
- Part 1 - Introduction and setup - Part 2 - Writing your first piece of Go (this post) - Part 3 - Interacting with JavaScript from Go - Part 4 - Sending a response to JavaScript - Part 5 - Compiling Go with webpack - Part 6 - Go, WASM, React and TypeScript Hello WASM, Go style You’ve got your Golang dev environment setup and now it’s time to put it to good use. We’re going to start really basic and write what amounts to a Hello World code: package main import "fmt" func main() { fmt.Println("Hello WASM from Go!") } Well… that’s not particularly exciting, but let’s break it down to understand just what we’re doing here (after all, I’m expecting this might be your first time looking at Go). package main Here’s how we initialise our Go application, we define a main package which becomes our entry point. This is what the Go runtime will look for when it starts up so it knows where the beginning is. Think of it like class Program in C# for a console application. Side note: I just said “our Go application”, and that’s something that you need to think differently about with Go + WASM, we’re not just writing a bunch of random files that we talk to from the browser, we’re building an application that we compile specifically to run in the WASM virtual machine. This will make a bit more sense as we go along. import "fmt" This is how Go brings in external packages that we want to work with. In this case I’m pulling in the fmt package from Go’s standard library that gives us something to work with later on. It’s like open System in F#, using System in C#, or import foo from 'bar'; in JavaScript. Like F# & C# we only open a package, we don’t assign the exports of the package local variable if we don’t want to. If we wanted to import multiple packages we can either have multiple import statements or write something like this: import ( "fmt" "strconv" ) Side note: We’re not ready to get too complex with packages, but if you want to know more check out this article. Finally we create a function: func main() { fmt.Println("Hello WASM from Go!") } We’ve named our function main and given it no arguments, which is important, because this is the entry point function in our main package that the Go runtime looks for. Again, it’s like static void Main(string[] args) in a C# console application. Next we’re using the fmt package we imported and the public member of it Println to… print a string to standard out. Run Go, Run! It’s time to test our code, we’ll use the go run command for that: ~/tmp> go run main.go Hello WASM from Go! Yay we’ve created and run some Go code, but we’ve run it on a command line, not in a browser, and after all, we’re trying to make WASM, and for that we can’t use go run, we’ll need go build. But if we were to just straight up run go build it will output a binary file for the OS/architecture you are currently working with, which is OK if you’re building an application to run on a device, but not for creating WASM binaries. For that we need to override the OS and architecture that we’re compiling for. Building Go for WASM Conveniently Go allows you to specify environment variables to override system defaults, and for that we need to set GOOS=js and GOARCH=wasm to specify that the target OS is JavaScript and the architecture is WASM. ~/tmp> GOOS=js GOARCH=wasm go build -o main.wasm main.go And now we’ll have a file main.wasm that lives in the directory we output to. But how do we use it? A Quick WebAssembly Primer For over 20 years we’ve had JavaScript in the browser as a way to run code on the web. WASM isn’t meant to be a replacement for JavaScript, in fact you’re really hard pressed to use it without writing (or at least executing) a little bit of JavaScript. This is because WebAssembly introduces a whole new virtual machine into the browser, something that has a very different paradigm to JavaScript and is a lot more isolated from the browser, and importantly user space. WebAssembly executed pre-compiled code and is not dynamic like JavaScript in the way it can run. Side note: There are some really great docs on MDN that covers WebAssembly, how it works, how to compile C/C++/Rust to WASM, the WebAssembly Text Format and all that stuff. If you really want to understand WASM have a read through that, in particular the WebAssembly Text Format is very good at explaining how it works. So before we can use our WASM binary we need to create a WASM module and instantiate the runtime space that WASM will run within. To do this we need to get the binary and instantiate it with WASM. MDN covers this in detail but you can do it either synchronously or asynchronously. We’ll stick with async for our approach as it seems to be the recommended way going forward. And the code will look like this: async function bootWebAssembly() { let imports = {}; let result = await WebAssembly.instantiateStreaming(fetch('/path/to/file.wasm'), imports); result.instance.exports.doStuff(); } bootWebAssembly(); Don’t worry about the imports piece yet, we’ll cover that in our next chapter. We’ve used fetch to download the raw bytes of our WASM file which is passed to WebAssembly and it will create your runtime space. This then gives us an object that has an instance (the runtime instance) that exports functions from our WebAssembly code (C/C++/Rust/etc). At least, this is how works in an ideal world, it seems that Go’s approach is a little different. Booting our Go WASM output Now that we understand how to setup WebAssembly let’s get our Go application going. As I mentioned Go is a little different to the example above and that’s because Go is more about running an application than creating some arbitrary code in another language that we can execute from JavaScript. Instead with Go we have a bit of a runtime wrapper that ships with Go 1.11+ called wasm_exec.js and you’ll find it in: ~/tmp> ls $"(go env GOROOT)/misc/wasm/wasm_exec.js" Copy this file into the folder with you main.wasm, we’re going to need it. Next we’ll create a webpage to run the JavaScript: <html> <head> <meta charset="utf-8"> <script src="wasm_exec.js"></script> <script> async function init() { const go = new Go(); let result = await WebAssembly.instantiateStreaming(fetch("main.wasm"), go.importObject) go.run(result.instance); } init(); </script> </head> <body></body> </html> Finally we’ll host the code somewhere, you can use any webserver that you want, goexec, http-server, IIS, etc. Note: Make sure your server supports the WASM mime type of application/wasm. Fire it up, launch the browser and open the dev tools, now you should see the result of fmt.Println there! Woo! Did you guess that we’d see it in the console? I bet you did, after all, that’s the thing most akin to standard out in the browser! Go’s WASM Runtime As you’ll see in the little HTML snippet above the was we start WASM for Go is a little different, first we create a Go runtime with new Go(), which is provided to us by wasm_exec.js. This then provides us with an importObject to pass to the instantiateStreaming function and the result we get back we pass back to the runtimes run method. This is because Go does a bit of funky stuff to treat the WASM binary as an application rather than arbitrary functions like others do. Over the rest of this series we’ll explore this a bit more too. Conclusion There you have it folks, we’ve created our first bit of WASM code using Go, created some browser assets, executed it in the browser and seen an output message. We’ve also learnt a little bit about how WASM works and how it’s isolated from the JavaScript environment, and what makes the approach with Go a little different to other WASM examples you’ll find on the web. But our application is isolated, tune in next time and we’ll start looking at how to interact with JavaScript from WASM.
https://www.aaron-powell.com/posts/2019-02-05-golang-wasm-2-writing-go/?utm_campaign=The%20Go%20Gazette&utm_medium=email&utm_source=Revue%20newsletter
CC-MAIN-2019-39
en
refinedweb
Hi This might be a very stupid question but i have read all i can find on this and i still cant get it to work. In a Custom ViewCell i have a image. this image need to change (red cross/green check-mark) based on a bool property that is in the object the cell is displaying. I am sure its just me that don't understand the hole DataBinding properly, but i hope someone can push me in the right direction i have no problems binding to the string and int property's to labels in the Cell. I have tried: a horrible hack, where i bind the IsToggled from a Switch and then in "OnAppering" use "if(_Switch.IsToggled)" - didn't work. to implement the property in a BindableObjectobject, but then i am stranded in how to bind the bool in the ViewCell class : public class ActivityDto : BindableObject { ... public static readonly BindableProperty ReadyProperty = BindableProperty.Create("Ready", typeof(bool), typeof(ActivityDto), false); public bool Ready { get { return (bool)GetValue(ReadyProperty); } set { SetValue(ReadyProperty, value); } } } public class ActivityCell : ViewCell { public Image readyImg; public ActivityCell() { readyImg = new Image(); if (???) { //set source } else { //set source } You have to bind the Image.SourceProperty to the Boolean property, then use a IValueConverter to convert from Boolean to Source Answers You have to bind the Image.SourceProperty to the Boolean property, then use a IValueConverter to convert from Boolean to Source AH! @AlessandroCaliaro thank you so much! Works like a charm.
https://forums.xamarin.com/discussion/comment/270199/
CC-MAIN-2019-47
en
refinedweb
To Run Tests in the BAT Playground The BAT playground is a web application you can use to familiarize yourself with Behavior Driven Development (BDD) functions. The playground runs tests on the Deck of Cards API by default, but you can tweak commands to experiment and learn about BDD functions by running tests on your API using the BAT playground. In this procedure, you modify the default test, and instead, test the JSONPlaceholder API you designed earlier in this workflow. Start the BAT playground by going to the following URL: Familiarize yourself with Behavior Driven Development (BDD) functions by experimenting with commands using your API in the BAT playground. For example, try altering Step 3, Setting an id with HashMap: Replace the URL in the GET function with the URL of the JSONPlaceholder API that you created in a previous workflow step. For example: FROM: GET `` TO: GET `` Change line 11 as follows: FROM: $.response.body.remaining mustEqual 52 // ←-- And a boolean assertion TO: $.response.body.id[1] mustEqual 2 // ←-- And a boolean assertion Replace the context.set statements to assert that the ID at index position 2 in the array is 3: FROM: context.set('deck_id', $.response.body.deck_id), // ←-- Setting deck_id TO: $.response.body.id[2] mustEqual 3, // ←-- And a boolean assertion Add these lines to assert that the fourth name in the array is Karianne: context.set('username', $.response.body.username[3]), $.response.body.username[3] mustMatch goodName, Step 3 now looks like this: import * from bat::BDD import * from bat::Assertions var goodName = "Karianne" var context = bat::Mutable::HashMap() // <--- First, the HashMap --- describe `Deck of cards` in [ // Then we get a new deck of cards GET `` with {} assert [ $.response.status mustEqual 200, // <--- Then a status assertion $.response.mime mustEqual "application/json", // <--- And a MIME type assertion $.response.body.id[2] mustEqual 3, // <--- And a boolean assertion ] execute [ context.set('username', $.response.body.username[3]), $.response.body.username[3] mustMatch goodName, log($.response) // <--- Then log the response ] ] Click Run Test. The color-coded results of the test appear. As you experiment in the BAT playground, hints appear when errors occur. Next, schedule testing and monitoring.
https://docs.mulesoft.com/api-functional-monitoring/bat-playground-task
CC-MAIN-2019-47
en
refinedweb
4,7 Arrow Function Inside A If Statement As @tykus has pointed out, you need to declare your arrow function. Using your example... did_we_find_item_to_remove ="no"; is_thier_pagination = "no"; let myArrowFunction = () => console.log("We are in"); if(did_we_find_item_to_remove != "yes" && is_thier_pagination != "yes") { myArrowFunction(); } else { console.log("We are out"); } Commented on Eloquent Subquery Additions Thanks for the lesson. Replied to Weird 'Function Name Must Be A String' With Auth:api Replied to Laravel Passport: 'Function Name Must Be A String' Error On trying to reproduce the error on a separate project, I figured out what the issue was; I missed to properly register the CheckClientCredentials::class middleware in app/Http/Kernel.php. Everything is now working perfectly. Replied to Vue.js Vs Angular: Replied to Weird 'Function Name Must Be A String' With Auth:api Started a new Conversation Laravel Passport: 'Function Name Must Be A String' Error I have a working implementation of Laravel Passport that has only uses Password Grant tokens for authentication of users. So on trying to implement Client Credentials Grant for server-to-server authentication, the client middleware throws a weird error I have never encountered before: "Function name must be a string" whenever I apply it on any route I want to protect. "message": "Function name must be a string", "exception": "Symfony\Component\Debug\Exception\FatalThrowableError", "file": "/var/www/html/mochange/project.test/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php", "line": 152, Replied to Weird 'Function Name Must Be A String' With Auth:api @matthewh Did you find a solution? I am getting this error as well when I tried to implement machine-to-machine (server-to-server) authentication (using client credentials) . In my case the auth:api middleware works perfectly fine, but the client middleware throws the same error described above. Replied to Tokenmismatchexception With Script @THINKINGMAN - You are welcome. Replied to Laravel Passport Test You don't need to add any controllers. Before diving into Laravel passport testing, you need to be familiar with writing tests in Laravel. Replied to Prevent Factory Duplication For Models Relationships Yes, there is another way. Modify your Seeder as shown below: use App\Thread; use App\Tag; factory(Thread::class, 60)->create(); Thread::all()->each(function ($thread){ factory(Tag::class, 10)->create(['thread_id'=> $thread->id]); }); Replied to Tokenmismatchexception With Script First, don't encrypt the csrf_token(). Pass it exactly the way it is generated. Next, I can see in your script you are setting the X-XSRF-TOKEN header. Have you tried using the X-CSRF-TOKEN header instead? Replied to Relationship HasMany() Dosen't Work Replied to Project Not Being Created Replied to Project Not Being Created What do you want the name of your project folder/directory to be? Replied to Passport With Cookies And CSRF Protection. Sounds like you are trying to consume your API with JavaScript. Replied to Relationship HasMany() Dosen't Work Replied to Insert Multiple Data Into Database @EMFINANGA - Say for example a customer can have many orders. Your orders table should have the relevant column to accommodate this relationship then you'll define the relationship and it's inverse in the respective models. I have used the customers to represent your customer_details table; customers - id - name ... orders - id - customer_id ... Customer model class <?php namespace App; use Illuminate\Database\Eloquent\Model; class Customer extends Model { protected $guarded; public function orders() { return $this->hasMany(Order::class, 'customer_id', 'id'); } } Order model class <?php namespace App; use Illuminate\Database\Eloquent\Model; class Order extends Model { protected $guarded = []; public function customer() { return $this->belongsTo(Customer::class, 'customer_id', 'id'); } } When a customer makes an order, you need to retrieve the relevant customer model (record) and create a related order model (record). Example; <?php namespace App\Http\Controllers; use App\Order; use App\Customer; use Illuminate\Http\Request; class OrdersController extends Controller { public function store(Request $request) { $customer = Customer::find(1); Order::create([ 'customer_id' => $customer->id, ... ]); } } Replied to Insert Multiple Data Into Database If I get your question, you need to use a relationship between the two tables. Replied to Laravel Csrf Token Mismatch In POST Request With URL @CRAZYLIFE - You are welcome. Replied to Laravel Csrf Token Mismatch In POST Request With URL @CRAZYLIFE - What's the full URL? Replied to Laravel Csrf Token Mismatch In POST Request With URL @CRAZYLIFE - Then you will need to use the second solution I have suggested above (exclude /response from CSRF protection) Replied to Laravel Csrf Token Mismatch In POST Request With URL If you don't want to pass the token then exclude /response from CSRF protection in /app/Http/Middleware/VerifyCsrfToken.php file; /** * The URIs that should be excluded from CSRF verification. * * @var array */ protected $except = [ 'response', ]; or protected $except = [ '', ]; Replied to Laravel Csrf Token Mismatch In POST Request With URL
https://laracasts.com/@TOKOIWESLEY
CC-MAIN-2019-47
en
refinedweb
03 May 2016 1 comment Python, Web development, Django, Mozilla ThreadedRequestsHTTPTransporttransport class to send Google Analytics pageview trackings asynchronously to Google Analytics to collect pageviews that aren't actually browser pages. We have an API on our Django site that was not designed from the ground up. We had a bunch of internal endpoints that were used by the website. So we simply exposed those as API endpoints that anybody can query. All we did was wrap certain parts carefully as to not expose private stuff and we wrote a simple web page where you can see a list of all the endpoints and what parameters are needed. Later we added auth-by-token. Now the problem we have is that we don't know which endpoints people use and, as equally important, which ones people don't use. If we had more stats we'd be able to confidently deprecate some (for easier maintanenace) and optimize some (to avoid resource overuse). Our first attempt was to use statsd to collect metrics and display those with graphite. But it just didn't work out. There are just too many different "keys". Basically, each endpoint (aka URL, aka URI) is a key. And if you include the query string parameters, the number of keys just gets nuts. Statsd and graphite is better when you have about as many keys as you have fingers on one hand. For example, HTTP error codes, 200, 302, 400, 404 and 500. Also, we already use Google Analytics to track pageviews on our website, which is basically a measure of how many people render web pages that have HTML and JavaScript. Google Analytic's UI is great and powerful. I'm sure other competing tools like Mixpanel, Piwik, Gauges, etc are great too, but Google Analytics is reliable, likely to stick around and something many people are familiar with. So how do you simulate pageviews when you don't have JavaScript rendering? The answer; using plain HTTP POST. (HTTPS of course). And how do you prevent blocking on sending analytics without making your users have to wait? By doing it asynchronously. Either by threading or a background working message queue. If you have a message queue configured and confident in its running, you should probably use that. But it adds a certain element of complexity. It makes your stack more complex because now you need to maintain a consumer(s) and the central message queue thing itself. What if you don't have a message queue all set up? Use Python threading. To do the threading, which is hard, it's always a good idea to try to stand on the shoulder of giants. Or, if you can't find a giant, find something that is mature and proven to work well over time. We found that in Raven. Raven is the Python library, or "agent", used for Sentry, the open source error tracking software. As you can tell by the name, Raven tries to be quite agnostic of Sentry the server component. Inside it, it has a couple of good libraries for making threaded jobs whose task is to make web requests. In particuarly, the awesome ThreadedRequestsHTTPTransport. Using it basically looks like this: import urlparse from raven.transport.threaded_requests import ThreadedRequestsHTTPTransport transporter = ThreadedRequestsHTTPTransport( urlparse.urlparse(''), timeout=5 ) params = { ...more about this later... } def success_cb(): print "Yay!" def failure_cb(exception): print "Boo :(" transporter.async_send( params, headers, success_cb, failure_cb ) The call isn't very different from regular plain old requests.post. This is probably the most exciting part and the place where you need some thought. It's non-trivial because you might need to put some careful thought into what you want to track. Your friends is: This documentation page There's also the Hit Builder tool where you can check that the values you are going to send make sense. Some of the basic ones are easy: Just set to v=1 That code thing you see in the regular chunk of JavaScript you put in the head, e.g tid=UA-1234-Z Optional word you call this type of traffic. We went with ds=api because we use it to measure the web API. The user ones are a bit more tricky. Basically because you don't want to accidentally leak potentially sensitive information. We decided to keep this highly anonymized. A random UUID (version 4) number that identifies the user or the app. Not to be confused with "User ID" which is basically a string that identifies the user's session storage ID or something. Since in our case we don't have a user (unless they use an API token) we leave this to a new random UUID each time. E.g. cid=uuid.uuid4().hex This field is not optional. Some string that identifies the user but doesn't reveal anything about the user. For example, we use the PostgreSQL primary key ID of the user as a string. It just means we can know if the same user make several API requests but we can never know who that user is. Google Analytics uses it to "lump" requests together. This field is optional. Next we need to pass information about the hit and the "content". This is important. Especially the "Hit type" because this is where you make your manually server-side tracking act as if the user had clicked around on the website with a browser. Set this to t=pageview and it'll show up Google Analytics as if the user had just navigated to the URL in her browser. It's kinda weird to do this because clearly the user hasn't. Most likely she's used curl or something from the command line. So it's not really a pageview but, on our end, we have "views" in the webserver that produce information to the user. Some of it is HTML and some of it is JSON, in terms of output format, but either way they're sending us a URL and we respond with data. The full absolute URL of that was used. E.g.. So in our Django app we set this to dl=request.build_absolute_uri(). If you have a site where you might have multiple domains in use but want to collect them all under just 1 specific domain you need to set dh=example.com. I actually don't know what the point of this is if you've already set the "Document location URL". In Google Analytics you can view your Content Drilldown by title instead of by URL path. In our case we set this to a string we know from the internal Python class that is used to make the API endpoint. dt='API (%s)'%api_model.__class__.__name__. There are many more things you can set, such as the clients IP, the user agent, timings, exceptions. We chose to NOT include the user's IP. If people using the JavaScript version of Google Analytics can set their browser to NOT include the IP, we should respect that. Also, it's rarely interesting to see where the requests for a web API because it's often servers' curl or requests that makes the query, not the human. Going back to the code example mentioned above, let's demonstrate a fuller example: import urlparse from raven.transport.threaded_requests import ThreadedRequestsHTTPTransport transporter = ThreadedRequestsHTTPTransport( urlparse.urlparse(''), timeout=5 ) # Remember, this is a Django, but you get the idea domain = settings.GOOGLE_ANALYTICS_DOMAIN if not domain or domain == 'auto': domain = RequestSite(request).domain params = { 'v': 1, 'tid': settings.GOOGLE_ANALYTICS_ID, 'dh': domain, 't': 'pageview, 'ds': 'api', 'cid': uuid.uuid4().hext, 'dp': request.path, 'dl': request.build_request_uri(), 'dt': 'API ({})'.format(model_class.__class__.__name__), 'ua': request.META.get('HTTP_USER_AGENT'), } def success_cb(): logger.info('Successfully informed Google Analytics (%s)', params) def failure_cb(exception): logger.exception(exception) transporter.async_send( params, headers, success_cb, failure_cb ) The class we're using, ThreadedRequestsHTTPTransport has, as you might have seen, a method called async_send. There's also one, with the exact same signature, called sync_send which does the same thing but in a blocking fashion. So you could make your code look someting silly like this: def send_tracking(page_title, request, async=True): # ...same as example above but wrapped in a function... function = async and transporter.async_send or transporter.sync_send function( params, headers, success_cb, failure_cb ) And then in your tests you pass in async=False instead. But don't do that. The code shouldn't be sub-serviant to the tests (unless it's for the sake of splitting up monster-long functions). Instead, I recommend you mock the inner workings of that ThreadedRequestsHTTPTransport class so you can make the whole operation synchronous. For example... import mock from django.test import TestCase from django.test.client import RequestFactory from where.you.have import pageview_tracking class TestTracking(TestCase): @mock.patch('raven.transport.threaded_requests.AsyncWorker') @mock.patch('requests.post') def test_pageview_tracking(self, rpost, aw): def mocked_queue(function, data, headers, success_cb, failure_cb): function(data, headers, success_cb, failure_cb) aw().queue.side_effect = mocked_queue request = RequestFactory().get('/some/page') with self.settings(GOOGLE_ANALYTICS_ID='XYZ-123'): pageview_tracking('Test page', request) # Now we can assert that 'requests.post' was called. # Left as an exercise to the reader :) print rpost.mock_calls This is synchronous now and works great. It's not finished. You might want to write a side effect for the requests.post so you can have better control of that post. That'll also give you a chance to potentially NOT return a 200 OK and make sure that your failure_cb callback function gets called. One thing I was very curious about when I started was to see how it worked if you really ran this for reals but without polluting your real Google Analytics account. For that I built a second little web server on the side, whose address I used instead of. So, change your code so that is not hardcoded but a variable you can change locally. Change it to and start this little Flask server: import time import random from flask import Flask, abort, request app = Flask(__name__) app.debug = True @app.route("/", methods=['GET', 'POST']) def hello(): print "- " * 40 print request.method, request.path print "ARGS:", request.args print "FORM:", request.form print "DATA:", repr(request.data) if request.args.get('sleep'): sec = int(request.args['sleep']) print "** Sleeping for", sec, "seconds" time.sleep(sec) print "** Done sleeping." if random.randint(1, 5) == 1: abort(500) elif random.randint(1, 5) == 1: # really get it stuck now time.sleep(20) return "OK" if __name__ == "__main__": app.run() Now you get an insight into what gets posted and you can pretend that it's slow to respond. Also, you can get an insight into how your app behaves when this collection destination throws a 5xx error. Google Analytics is tricky to test in that they collect all the stuff they collect then they take their time to process it and it then shows up the next day as stats. But, there's a hack! You can go into your Google Analytics account and click "Real-Time" -> "Overview" and you should see hits coming in as you're testing this. Obviously you don't want to do this on your real production account, but perhaps you have a stage/dev instance you can use. Or, just be patient :) Follow @peterbe on Twitter cool
https://api.minimalcss.app/plog/ga-pageviews-on-non-web
CC-MAIN-2019-47
en
refinedweb
This article shows you how to use feature Add REST API Client in Visual Studio 2017. To practice demo, you should read How to implement swagger ui with web api in asp.net mvc, the article shows you how to create web api in visual studio 2017. Next, create a new windows forms project and design a simple UI as shown below. Right click on your project->Add->REST API Client..., enter your Swagger URL and Client namespace, then click the OK button to generate the REST API client. Swagger is an agile technology standard that enables the discovery of REST APIs, providing a way for any software to determine the features of a REST API. Click OK button to generate your API Client You can see your api client automatically generate in your project. Add code to handler load button event private void btnLoad_Click(object sender, EventArgs e) { TestServiceClient client = new TestServiceClient(new Uri(""), new AnonymousCredential()); //var headers = client.HttpClient.DefaultRequestHeaders; //string accessToken = ""; //headers.Remove("Authorization"); //headers.Add("Authorization", $"Bearer {accessToken}"); CustomerOperations operations = new CustomerOperations(client); customerBindingSource.DataSource = operations.GetCustomers(); } Create an AnonymousCredential class inheriting from ServiceClientCredentials public class AnonymousCredential : ServiceClientCredentials { } Press F5 to run your project
https://c-sharpcode.com/thread/how-to-use-feature-rest-api-client-in-visual-studio/
CC-MAIN-2019-47
en
refinedweb
External Data Exchange Attribute Class Definition Warning This API is now obsolete. Marks an interface as a local service interface. This class cannot be inherited. public ref class ExternalDataExchangeAttribute sealed : Attribute [System.AttributeUsage(System.AttributeTargets.Interface, AllowMultiple=false, Inherited=false)] [System.Obsolete("The System.Workflow.* types are deprecated. Instead, please use the new types from System.Activities.*")] public sealed class ExternalDataExchangeAttribute : Attribute type ExternalDataExchangeAttribute = class inherit Attribute Public NotInheritable Class ExternalDataExchangeAttribute Inherits Attribute - Inheritance - - Attributes - Examples. [ExternalDataExchangeAttribute()] public interface IStartPurchaseOrder { event EventHandler<InitiatePOEventArgs> InitiatePurchaseOrder; } Remarks Warning This material discusses types and namespaces that are obsolete. For more information, see Deprecated Types in Windows Workflow Foundation 4.5.. public interface IInterfaceName<TCommand> { void MethodName(TCommand Request); }
https://docs.microsoft.com/en-us/dotnet/api/system.workflow.activities.externaldataexchangeattribute?redirectedfrom=MSDN&view=netframework-4.8
CC-MAIN-2019-47
en
refinedweb
NULL vs Empty Gokuldroid ・4 min read originally published in codefromdude.com I tried to book a ticket in irctc. while giving CCV of my debit card (*mandatory field), accidentally I gave empty value and proceeded to book a ticket. It threw an error after refreshing the whole page as the validation happened (or something went wrong) in the backend. they might have handled in frontend itself. Because of this, I lost my booking window. They might handle millions of requests per day. but they simply failed at this. It should be a common-sense for a developer to handle null vs empty. To err is human. but repeating history is not acceptable. I've used a good number of programming languages (Typescript, Java, Kotlin, Ruby, Javascript, Python, C++, C, PHP). There are days I assumed the user is not dumb enough to give empty values for username or password. I haven't developed a real-world software back then. Handling null and empty is always an art. Some programming languages are good at that. Some of them shout like Kotlin 'Yes! we support null safety in our language'. I am not going to give you the definition of null or empty. Do some research and fight with your colleagues about null vs Empty. You will hear some interesting story. I will throw some snippets from the languages I've used so far. Different languages handle them slightly differently. You will get a better understanding of null vs Empty argument once you know about it. Let's take java first, public void demo(String[] args) { Integer count; int countInner; System.out.println(count); System.out.println(countInner); } This one won't compile. It will throw an error, saying variable not initialized. Let's deceive the complier. private Integer count; private int countInner; public void demo(String[] args) { System.out.println(count); System.out.println(countInner); } Yey!! it got complied. What will be the output of this? Can you guess?. null 0 Why the compiler let us compile successfully this time?. When that is a local variable compiler will know for sure that is not yet assigned to any value. we can assign values outside of this method, so there is a possibility of non-empty value inside this function. Thus java compiler won't prevent us from compiling. What if we didn't assign any values to these variables and use these variables like this, private static Integer count; public static void main(String[] args) { System.out.println(count.toString()); } Oops! Here comes the NullPointerException. Now you have some idea about null problem right?. Let's move on to Empty, There is a slight difference between no password was given and an empty password. Take a look at this snippet. function validPassword(password) { return password != null; } function validPassword2(password) { return password != undefined; } function validPassword3(password) { return password !== undefined; } console.log(validPassword('')); console.log(validPassword(null)); console.log(validPassword2('')); console.log(validPassword2(null)); console.log(validPassword3('')); console.log(validPassword3(null)); the output of this will be, true false true false true true To get a better understanding of this, take a look at this javascript truth table. What if your password validation just checks against null or undefined, you will be allowing the user to have an empty password. For, == comparison For, === comparison weird isn't it?. (Javascript is always weird ;-)) Some programming languages provide safety against some of these issues. In Java, @NonNull public static String longToIp(@NonNull Long ip) { return ((ip >> 24) & 0xFF) + "." + ((ip >> 16) & 0xFF) + "." + ((ip >> 8) & 0xFF) + "." + (ip & 0xFF); } But you need to configure tools like FindBugs to get errors and warnings, Can't be better than this?. In some languages null safety is tied into the language itself. thus provides better support for null safety. In Kotlin, var a: String = "abc" a = null // compilation error More details about kotlin null safety, In typescript (Javascript with types), let name: string; name = null; // compilation error More details about typescript null safety Did you notice something?. No one provides safety against empty values. Because of this, a lot of frameworks and libraries have utils to check empty value. While writing code, keep asking one thing, "Should I handle empty state?". Handling null and empty state everywhere is also bad. As a rule of thumb, Don't use optional values in the place of mandatory values. If we strictly need a value, throw an error early on, if it null. Please keep this in mind 'there might be a stupid person like me who never reads the instructions'. "Should a function return null or empty?" is a whole new argument, let's save that for another day. What Are the Most Important CS Principles to Learn as a New Dev from a Non-Traditional Background? Let's Build a CAPTCHA Generator with Node.js Andrew Healey - Fractal intro, Fade bullets, Spectral menu | Module Monday 63 Tyler Warnock - I'm building a copy of the iOS Home Screen using React Spring and Styled Components. Here's what I've got so far. #2 Lucas Chen - Just wanted to clarify that the different Java compiler behaviors over instance variable vs local variable is not a result of inference, but as mandated by Java Spec. Yes, that's why wrapped types (like Integer, Boolean) can be null. It is easy to make a mistake. I just wanted to show one way to get null pointer exception. There is lot of other cases also.
https://dev.to/gokuldroid/null-vs-empty-42aj
CC-MAIN-2019-47
en
refinedweb
This file defines special dependency analysis routines used in Objective C ARC Optimizations. More... #include "DependencyAnalysis.h" #include "ObjCARC.h" #include "ProvenanceAnalysis.h" #include "llvm/IR/CFG.h" Go to the source code of this file. This file defines special dependency analysis routines used in Objective C ARC Optimizations. DependencyAnalysis.cpp. Definition at line 30 of file DependencyAnalysis.cpp.
https://llvm.org/doxygen/DependencyAnalysis_8cpp.html
CC-MAIN-2019-47
en
refinedweb
Make your decorators glossy! Project description Installation pip install glossy Start Decorating import glossy import time @glossy.decorator def timer(func, *args, **kwargs): """ Timer Place this decorator on functions to see how long they take to execute. """ start = time.time() result = func(*args, **kwargs) secs = time.time() - start name = func.__name__ print(f"Function {name} took {secs} seconds") return result Features Project details Release history Release notifications Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/glossy/
CC-MAIN-2019-47
en
refinedweb
We are about to switch to a new forum software. Until then we have removed the registration on this forum. Hello guys, Is Possible to know Which time to complete a sound (music) an .mp3 or .wav of a file in Processing, using the Minim library? For example, I need at the end of a particular sound, perform a certain action, so I need to know when to finished. This is the code I used as an example, although I'm using is in another project. import ddf.minim.*; Minim minim; AudioPlayer player[] = new AudioPlayer[3]; String filenames[] = new String[] {"groove.mp3", "jingle.mp3", "Kalimba.mp3"}; void setup(){ minim = new Minim(this); for(int i = 0; i < 3; i++){ player[i] = minim.loadFile(filenames[i], 2048); } } void draw(){ // } void keyPressed(){ int som; if (key == 'A' || key == 'a') som = 0; else if (key == 'B' || key == 'b') som = 1; else som = 2; playSom(som); } void playSom(int opcSom){ for (int i = 0; i < 3; i++){ if (player[i].isPlaying()) player[i].pause(); } player[opcSom].rewind(); player[opcSom].play(); println("Duration: "); //??? } void stop(){ for(int i = 0; i < 3; i++){ player[i].close(); } minim.stop(); super.stop(); } Thanks for attention. Answers Hello GoToLoop, Thanks for the feedback ... I will do some testing here ... thank you Hello, Sorry to still be bothering with it ... looks like it has a problem of "timing" between isPlaying () function, position () and length () the minim library. With a file regular work at the end of the sound, I call another normal screen ... with another file did not work, from what I saw, the position did not come reached or exceeded the value of length (). I took a look at the documentation, but did not see anything related to time. (As when working with films, for example, validation of the movie works perfect finishes). Below is the print which is part of the Code, and values while running the sound. Tell if there is any way to control the maximum time and passed, and not the file size? Sorry English, the translator can see that is not cool ... thanks! Hello, I managed to solve the problem by comparing the state are playing or not, and a variable to know when the sound began ... When the sound is not playing, and the variable is true, it means it will be Feite some other action, and change the variable to false, only returning true when I call the sound again. It was only this validation: Thank you and to the next bug ( laughs ...)
https://forum.processing.org/two/discussion/13656/file-length-mp3-or-wav-with-minim-and-processing
CC-MAIN-2019-47
en
refinedweb
Provided by: libsvn-hooks-perl_1.31-1_all NAME SVN::Hooks - Framework for implementing Subversion hooks VERSION version 1.31 SYNOPSIS A single script can implement several hooks: #!/usr/bin/perl use SVN::Hooks; START_COMMIT { my ($repo_path, $username, $capabilities, $txt_name) = @_; # ... }; PRE_COMMIT { my ($svnlook) = @_; # ... }; run_hook($0, @ARGV); Or you can use already implemented hooks via plugins: #!/usr/bin/perl use SVN::Hooks; use SVN::Hooks::DenyFilenames; use SVN::Hooks::DenyChanges; use SVN::Hooks::CheckProperty; ... run_hook($0, @ARGV); INTRODUCTION In order to really understand what this is all about you need to understand Subversion <> and its hooks. You can read everything about this in the svnbook, a.k.a. Version Control with Subversion, at <>. Subversion is a version control system, and as such it is used to keep historical revisions of files and directories. Each revision maintains information about all the changes introduced since the previous one: date, author, log message, files changed, files renamed, etc. Subversion uses a client/server model. The server maintains the repository, which is the database containing all the historical information we talked about above. Users use a Subversion client tool to query and change the repository but also to maintain one or more working areas. A working area is a directory in the user machine containing a copy of a particular revision of the repository. The user can use the client tool to make all sorts of changes in his working area and to "commit" them all in an atomic operation that bumps the repository to a new revision. A hook is a specifically named program that is called by the Subversion server during the execution of some operations. There are exactly nine hooks which must reside under the "hooks" directory in the repository. When you create a new repository, you get nine template files in this directory, all of them having the ".tmpl" suffix and helpful instructions inside explaining how to convert them into working hooks. When Subversion is performing a commit operation on behalf of a client, for example, it calls the "start-commit" hook, then the "pre-commit" hook, and then the "post-commit" hook. The first two can gather all sorts of information about the specific commit transaction being performed and decide to reject it in case it doesn't comply to specified policies. The "post-commit" can be used to log or alert interested parties about the commit just done. IMPORTANT NOTE from the svnbook: ." There are several useful hook scripts available elsewhere <>, mainly for those three associated with the commit operation. However, when you try to combine the functionality of two or more of those scripts in a single hook you normally end up facing two problems. Complexity In order to integrate the funcionality of more than one script you have to write a driver script that's called by Subversion startup cost because they are, well, scripts and not binaries. And second, because as each script is called in turn they have no memory of the scripts called before and have to gather the information about the transaction again and again, normally by calling the "svnlook" command, which spawns yet another process. SVN::Hooks is a framework for implementing Subversion hooks that tries to solve these problems. Instead of having separate scripts implementing different functionality you have a single script implementing all the funcionality you need either directly or using some of the existing plugins, which are implemented by Perl modules in the SVN::Hooks:: namespace. This single script can be used to implement all nine standard hooks, because each hook knows when to perform based on the context in which the script was called. USAGE In the Subversion server, go to the "hooks" directory under the directory where the repository was created. You should see there the nine hook templates. Create a script there using the SVN::Hooks module. $ cd /path/to/repo/hooks $ cat >svn-hooks.pl <<END_OF_SCRIPT #!/usr/bin/perl use SVN::Hooks; run_hook($0, @ARGV); END_OF_SCRIPT $ chmod +x svn-hooks.pl This script will serve for any hook. Create symbolic links pointing to it for each hook you are interested in. (You may create symbolic links for all nine hooks, but this will make Subversion call the script for all hooked operations, even for those that you may not be interested in. Nothing wrong will happen, but the server will be doing extra work for nothing.) $ ln -s svn-hooks.pl start-commit $ ln -s svn-hooks.pl pre-commit $ ln -s svn-hooks.pl post-commit $ ln -s svn-hooks.pl pre-revprop-change As is the script won't do anything. You have to implement some hooks or use some of the existing ones implemented as plugins. Either way, the script should end with a call to "run_hooks" passing to it the name with which it wass called ($0) and all the arguments it received (@ARGV). Implementing Hooks Implement hooks using one of the nine hook directives below. Each one of them get a single block (anonymous function) as argument. The block will be called by "run_hook" with proper arguments, as indicated below. These arguments are the ones gotten from @ARGV, with the exception of the ones identified by "SVN::Look". These are SVN::Look objects which can be used to grok detailed information about the repository and the current transaction. (Please, refer to the SVN::Look documentation to know how to use it.) · POST_COMMIT(SVN::Look) · POST_LOCK(repos-path, username) · POST_REVPROP_CHANGE(SVN::Look, username, property-name, action) · POST_UNLOCK(repos-path, username) · PRE_COMMIT(SVN::Look) · PRE_LOCK(repos-path, path, username, comment, steal-lock-flag) · PRE_REVPROP_CHANGE(SVN::Look, username, property-name, action) · PRE_UNLOCK(repos-path, path, username, lock-token, break-unlock-flag) · START_COMMIT(repos-path, username, capabilities, txt-name) This is an example of a script implementing two hooks: #!/usr/bin/perl use SVN::Hooks; # ... START_COMMIT { my ($repos_path, $username, $capabilities, $txt_name) = @_; exists $committers{$username} or die "User '$username' is not allowed to commit.\n"; $capabilities =~ /mergeinfo/ or die "Your Subversion client does not support mergeinfo capability.\n"; }; PRE_COMMIT { my ($svnlook) = @_; foreach my $added ($svnlook->added()) { $added !~ /\.(exe|o|jar|zip)$/ or die "Please, don't commit binary files such as '$added'.\n"; } }; run_hook($0, @ARGV); Note that the hook directives resemble function definitions but they're not. They are function calls, and as such must end with a semi-colon. Most of the "start-commit" and "pre-*" hooks are used to check some condition. If the condition holds, they must simply end without returning anything. Otherwise, they must "die" with a suitable error message. Also note that each hook directive can be called more than once if you need to implement more than one specific hook. The hooks will run in the order they were defined. Using Plugins There are several hooks already implemented as plugin modules under the namespace "SVN::Hooks::", which you can use. The main ones are described succinctly below. Please, see their own documentation for more details. SVN::Hooks::AllowPropChange Allow only specified users make changes in revision properties. SVN::Hooks::CheckCapability Check if the Subversion client implements the required capabilities. SVN::Hooks::CheckJira Integrate Subversion with the JIRA <> ticketing system. SVN::Hooks::CheckLog Check if the log message in a commit conforms to a Regexp. SVN::Hooks::CheckMimeTypes Check if the files added to the repository have the "svn:mime-type" property set. Moreover, for text files, check if the properties "svn:eol-style" and "svn:keywords" are also set. SVN::Hooks::CheckProperty Check for specific properties for specific kinds of files. SVN::Hooks::CheckStructure Check if the files and directories being added to the repository conform to a specific structure. SVN::Hooks::DenyChanges Deny the addition, modification, or deletion of specific files and directories in the repository. Usually used to deny modifications in the "tags" directory. SVN::Hooks::DenyFilenames Deny the addition of files which file names doesn't comply with a Regexp. Usually used to disallow some characteres in the filenames. SVN::Hooks::Notify Sends notification emails after successful commits. SVN::Hooks::UpdateConfFile Allows you to maintain Subversion configuration files versioned in the same repository where they are used. Usually used to maintain the configuration file for the hooks and the repository access control file. This is an example of a script using some plugins: #!/usr/bin/perl use SVN::Hooks; use SVN::Hooks::CheckProperty; use SVN::Hooks::DenyChanges; use SVN::Hooks::DenyFilenames; # Accept only letters, digits, underlines, periods, and hifens DENY_FILENAMES(qr/[^-\/\.\w]/i); # Disallow modifications in the tags directory DENY_UPDATE(qr:^tags:); # OpenOffice.org documents need locks CHECK_PROPERTY(qr/\.(?:od[bcfgimpst]|ot[ghpst])$/i => 'svn:needs-lock'); run_hook($0, @ARGV); Those directives are implemented and exported by the hooks. Note that using hooks you don't need to be explicit about which one of the nine hooks will be triggered by the directives. This is on purpose, because some plugins can trigger more than one hook. The plugin documentation should tell you which hooks can be triggered so that you know which symbolic links you need to create in the hooks repository directory. Configuration file Before calling the hooks, the function "run_hook" evaluates a file called svn-hooks.conf under the conf directory in the repository, if it exists. Hence, you can choose to put all the directives in this file and not in the script under the hooks directory. The advantage of this is that you can then manage the configuration file with the "SVN::Hooks::UpdateConfFile" and have it versioned under the same repository that it controls. One way to do this is to use this hook script: #!/usr/bin/perl use SVN::Hooks; use SVN::Hooks::UpdateConfFile; use ... UPDATE_CONF_FILE( 'conf/svn-hooks.conf' => 'svn-hooks.conf', validator => [qw(/usr/bin/perl -c)], rotate => 2, ); run_hook($0, @ARGV); Use this hook script and create a directory called conf at the root of the repository (besides the common trunk, branches, and tags directories). Add the svn-hooks.conf file under the conf directory. Then, whenever you commit a new version of the file, the pre- commit hook will validate it sintactically ("/usr/bin/perl -c") and copy its new version to the conf/svn-hooks.conf file in the repository. (Read the SVN::Hooks::UpdateConfFile documentation to understand it in details.) Being a Perl script, it's possible to get fancy with the configuration file, using variables, functions, and whatever. But for most purposes it consists just in a series of configuration directives. Don't forget to end it with the "1;" statement, though, because it's evaluated with a "do" statement and needs to end with a true expression. Please, see the plugins documentation to know about the directives. PLUGIN DEVELOPER TUTORIAL Yet to do. EXPORT run_hook This is responsible to invoke the right plugins depending on the context in which it was called. Its first argument must be the name of the hook that was called. Usually you just pass $0 to it, since it knows to extract the basename of the parameter. Its second argument must be the path to the directory where the repository was created. The remaining arguments depend on the hook for which it's being called, like this: · start-commit repo-path user capabilities txt-name · pre-commit repo-path txn · post-commit repo-path rev · pre-lock repo-path path user · post-lock repo-path user · pre-unlock repo-path path user · post-unlock repo-path user · pre-revprop-change repo-path rev user propname action · post-revprop-change repo-path rev user propname action But as these are exactly the arguments Subversion passes when it calls the hooks, you usually call "run_hook" like this: run_hook($0, @ARGV); REPOSITORY <> AUTHOR.
http://manpages.ubuntu.com/manpages/xenial/man3/SVN::Hooks.3pm.html
CC-MAIN-2019-47
en
refinedweb
QProgressBar causing bad performance in QT5? I'm developping a program which parses a file (365000 lines) in which I try to match some keywords after reading each line. This computation along with the update of my QProgressBar are made in another thread using QThread. Everything works fine except for the performance especially when I update the QProgressBar. I use a timer for the parsing and the result is just STUNNING. When I emit a signal to update the QProgressBar the program takes around 45 seconds but when I do not emit the signal for the QProgressBar update then the program takes around 0.40 sec =/ from PyQt5 import QtCore, QtWidgets, QtGui import sys import time liste = ["failed", "exception"] class ParseFileAsync(QtCore.QThread): match = QtCore.pyqtSignal(str) PBupdate = QtCore.pyqtSignal(int) PBMax = QtCore.pyqtSignal(int) def run(self): cpt = 0 with open("shutdown_issue_1009.log", "r") as fichier: fileLines = fichier.readlines() lineNumber = len(fileLines) self.PBMax.emit(lineNumber) t0 = time.time() for line in fileLines: cpt+=1 self.PBupdate.emit(cpt) for element in liste: if element in line: self.match.emit(line) finalTime = time.time() - t0 print("over :", finalTime) class Ui_MainWindow(QtWidgets.QMainWindow): def __init__(self): super().__init__() self.setupUi(self) self.thread = ParseFileAsync() self.thread.match.connect(self.printError) self.thread.PBupdate.connect(self.updateProgressBar) self.thread.PBMax.connect(self.setMaximumProgressBar) self.pushButton_GO.clicked.connect(self.startThread) def printError(self, line): self.textEdit.append(line) def updateProgressBar(self, value): self.progressBar.setValue(value) def setMaximumProgressBar(self, value): self.progressBar.setMaximum(value) def startThread(self): self.thread.start() Console output: over : 44.49321101765038 //QProgressBar updated over : 0.3695987798147516 //QProgressBar not updated (#self.PBupdate.emit(cpt)) Am I missing something or is that expected ? - mrjj Lifetime Qt Champion last edited by Hi I dont know python, but if in a c++ application i would try the same with no slot connected to see if it was sending the signal very fast that was the issue. So try to emit but dont respond to it. If still fast i would look at the slot. Using Progressbar in c++, i didn't notice such huge difference and painting the bar etc should not be so expensive. But if it takes 0.3 sec. do u really need a progress bar anyway? I did the test and appearantly it's the painting which is very expensive... with no slot connected, the program takes 1,5 seconds to complete the task. There is a very similar issue reported at. But generally ProgressBars are consuming especially when we update it that often (>365000 times) while there is no significant change at each iteration. So I simply update the QProgressBar less often and the results are good (~1s) and the progression still smooth. PSB : class ParseFileAsync(QtCore.QThread): match = QtCore.pyqtSignal(str) PBupdate = QtCore.pyqtSignal(int) PBMax = QtCore.pyqtSignal(int) def run(self): with open("test_long.log", "r") as fichier: fileLines = fichier.readlines() self.lineNumber = len(fileLines) self.PBMax.emit(self.lineNumber) if (self.lineNumber < 30): self.parseFile(fileLines, False) else: self.parseFile(fileLines, True) def parseFile(self, fileLines, isBig): cpt = 0 if(isBig): for line in fileLines: cpt+=1 if(cpt % (int(self.lineNumber/30)) == 0): self.PBupdate.emit(cpt) for element in liste: if element in line: self.match.emit(line) self.PBupdate.emit(self.lineNumber) #To avoid QProgressBar stopping at 99% else: for line in fileLines: cpt+=1 self.PBupdate.emit(cpt) for element in liste: if element in line: self.match.emit(line)
https://forum.qt.io/topic/66351/qprogressbar-causing-bad-performance-in-qt5/1
CC-MAIN-2019-47
en
refinedweb
The AWS SDK for Python provides a pair of methods to upload a file to an S3 bucket. The upload_file method accepts a file name, a bucket name, and an object name. The method handles large files by splitting them into smaller chunks and uploading each chunk in parallel. import logging import boto3 from botocore.exceptions import ClientError def upload_file(file_name, bucket, object_name=None): """Upload a file to an S3 bucket :param file_name: File to upload :param bucket: Bucket to upload to :param object_name: S3 object name. If not specified then file_name is used :return: True if file was uploaded, else False """ # If S3 object_name was not specified, use file_name if object_name is None: object_name = file_name # Upload the file s3_client = boto3.client('s3') try: response = s3_client.upload_file(file_name, bucket, object_name) except ClientError as e: logging.error(e) return False return True The upload_fileobj method accepts a readable file-like object. The file object must be opened in binary mode, not text mode. s3 = boto3.client('s3') with open("FILE_NAME", "rb") as f: s3.upload_fileobj(f, "BUCKET_NAME", "OBJECT_NAME") The upload_file and upload_fileobj methods are provided by the S3 Client, Bucket, and Object classes. The method functionality provided by each class is identical. No benefits are gained by calling one class's method over another's. Use whichever class is most convenient. Both upload_file and upload_fileobj accept an optional ExtraArgs parameter that can be used for various purposes. The list of valid ExtraArgs settings is specified in the ALLOWED_UPLOAD_ARGS attribute of the S3Transfer object at boto3.s3.transfer.S3Transfer.ALLOWED_UPLOAD_ARGS. The following ExtraArgs setting specifies metadata to attach to the S3 object. s3.upload_file( 'FILE_NAME', 'BUCKET_NAME', 'OBJECT_NAME', ExtraArgs={'Metadata': {'mykey': 'myvalue'}} ) The following ExtraArgs setting assigns the canned ACL (access control list) value 'public-read' to the S3 object. s3.upload_file( 'FILE_NAME', 'BUCKET_NAME', 'OBJECT_NAME', ExtraArgs={'ACL': 'public-read'} ) The ExtraArgs parameter can also be used to set custom or multiple ACLs. s3.upload_file( 'FILE_NAME', 'BUCKET_NAME', 'OBJECT_NAME', ExtraArgs={ 'GrantRead': 'uri=""', 'GrantFullControl': 'id="01234567890abcdefg"', } ) Both upload_file and upload_fileobj accept an optional Callback parameter. The parameter references a class that the Python SDK invokes intermittently during the transfer operation. Invoking a Python class executes the class's __call__ method. For each invocation, the class is passed the number of bytes transferred up to that point. This information can be used to implement a progress monitor. The following Callback setting instructs the Python SDK to create an instance of the ProgressPercentage class. During the upload, the instance's __call__ method will be invoked intermittently. s3.upload_file( 'FILE_NAME', 'BUCKET_NAME', 'OBJECT_NAME', Callback=ProgressPercentage('FILE_NAME') ) An example implementation of the ProcessPercentage class is shown below. import os import sys import threading class ProgressPercentage(object): def __init__(self, filename): self._filename = filename self._size = float(os.path.getsize(filename)) self._seen_so_far = 0 self._lock = threading.Lock() def __call__(self, bytes_amount): # To simplify, assume this is hooked up to a single filename with self._lock: self._seen_so_far += bytes_amount percentage = (self._seen_so_far / self._size) * 100 sys.stdout.write( "\r%s %s / %s (%.2f%%)" % ( self._filename, self._seen_so_far, self._size, percentage)) sys.stdout.flush()
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/s3-uploading-files.html
CC-MAIN-2020-50
en
refinedweb
$ cnpm install simplewebrtc The open-source version of SimpleWebRTC has been deprecated. This repository will remain as-is but is no longer actively maintained. Read more about the "new" SimpleWebRTC (which is an entirely different thing) on Want to see it in action? Check out the demo: Want to run it locally? npm install && npm run test-page <!DOCTYPE html> <html> <head> <script src=""></script> <style> #remoteVideos video { height: 150px; } #localVideo { height: 150px; } </style> </head> <body> <video id="localVideo"></video> <div id="remoteVideos"></div> </body> </html> npm install --save simplewebrtc # for yarn users yarn add simplewebrtc After that simply import simplewebrtc into your project import SimpleWebRTC from 'simplewebrtc'; var webrtc = new SimpleWebRTC({ // the id/element dom element that will hold "our" video localVideoEl: 'localVideo', // the id/element dom element that will hold remote videos remoteVideosEl: 'remoteVideos', // immediately ask for camera access autoRequestMedia: true }); // we have to wait until it's ready webrtc.on('readyToCall', function () { // you can name it anything webrtc.join completely different approach. Sometimes you need to do more advanced stuff. See for some examples. Join the Gitter channel: new SimpleWebRTC(options) object options- options object provided to constructor consisting of: string url- required url for signaling server. Defaults to signaling server URL which can be used for development. You must use your own signaling server for production. object socketio- optional object to be passed as options to the signaling server connection. Connection connection- optional connection object for signaling. See Connectionbelow. Defaults to a new SocketIoConnection bool debug- optional flag to set the instance to debug mode [string|DomElement] localVideoEl- ID or Element to contain the local video element [string|DomElement] remoteVideosEl- ID or Element to contain the remote video elements bool autoRequestMedia- optional(=false) option to automatically request user media. Use trueto request automatically, or falseto request media later with startLocalVideo bool enableDataChannelsoptional(=true) option to enable/disable data channels (used for volume levels or direct messaging) bool autoRemoveVideos- optional(=true) option to automatically remove video elements when streams are stopped. bool adjustPeerVolume- optional(=false) option to reduce peer volume when the local participant is speaking number peerVolumeWhenSpeaking- optional(=.0.25) value used in conjunction with adjustPeerVolume. Uses values between 0 and 1. object media- media options to be passed to getUserMedia. Defaults to { video: true, audio: true }. Valid configurations described on MDN with official spec at w3c. object receiveMedia- optional RTCPeerConnection options. Defaults to { offerToReceiveAudio: 1, offerToReceiveVideo: 1 }. object localVideo- optional options for attaching the local video stream to the page. Defaults to { autoplay: true, // automatically play the video stream on the page mirror: true, // flip the local video to mirror mode (for UX) muted: true // mute local video stream to prevent echo } object logger- optional alternate logger for the instance; any object that implements log, warn, and errormethods. object peerConnectionConfig- optional options to specify own your own STUN/TURN servers. By default these options are overridden when the signaling server specifies the STUN/TURN server configuration. Example on how to specify the peerConnectionConfig: { "iceServers": [{ "url": "stun3.l.google.com:19302" }, { "url": "turn:your.turn.servers.here", "username": "your.turn.server.username", "credential": "your.turn.server.password" } ], iceTransports: 'relay' } capabilities - the webrtcSupport object that describes browser capabilities, for convenience config - the configuration options extended from options passed to the constructor connection - the socket (or alternate) signaling connection webrtc - the underlying WebRTC session manager To set up event listeners, use the SimpleWebRTC instance created with the constructor. Example: var webrtc = new SimpleWebRTC(options); webrtc.on('connectionReady', function (sessionId) { // ... }) 'connectionReady', sessionId - emitted when the signaling connection emits the connect event, with the unique id for the session. 'createdPeer', peer - emitted three times: when joining a room with existing peers, once for each peer when a new peer joins a joined room when sharing screen, once for each peer peer - the object representing the peer and underlying peer connection 'channelMessage', peer, channelLabel, {messageType, payload} - emitted when a broadcast message to all peers is received via dataChannel by using the method sendDirectlyToAll(). 'stunservers', [...args] - emitted when the signaling connection emits the same event 'turnservers', [...args] - emitted when the signaling connection emits the same event 'localScreenAdded', el - emitted after triggering the start of screen sharing elthe element that contains the local screen stream 'joinedRoom', roomName - emitted after successfully joining a room with the name roomName 'leftRoom', roomName - emitted after successfully leaving the current room, ending all peers, and stopping the local screen stream 'videoAdded', videoEl, peer - emitted when a peer stream is added videoEl- the video element associated with the stream that was added peer- the peer associated with the stream that was added 'videoRemoved', videoEl, peer - emitted when a peer stream is removed videoEl- the video element associated with the stream that was removed peer- the peer associated with the stream that was removed createRoom(name, callback) - emits the create event on the connection with name and (if provided) invokes callback on response joinRoom(name, callback) - joins the conference in room name. Callback is invoked with callback(err, roomDescription) where roomDescription is yielded by the connection on the join event. See signalmaster for more details. startLocalVideo() - starts the local media with the media options provided in the config passed to the constructor testReadiness() - tests that the connection is ready and that (if media is enabled) streams have started mute() - mutes the local audio stream for all peers (pauses sending audio) unmute() - unmutes local audio stream for all peers (resumes sending audio) pauseVideo() - pauses sending video to peers resumeVideo() - resumes sending video to all peers pause() - pauses sending audio and video to all peers resume() - resumes sending audio and video to all peers sendToAll(messageType, payload) - broadcasts a message to all peers in the room via the signaling channel (websocket) string messageType- the key for the type of message being sent object payload- an arbitrary value or object to send to peers sendDirectlyToAll(channelLabel, messageType, payload) - broadcasts a message to all peers in the room via a dataChannel string channelLabel- the label for the dataChannel to send on string messageType- the key for the type of message being sent object payload- an arbitrary value or object to send to peers getPeers(sessionId, type) - returns all peers by sessionId and/or type shareScreen(callback) - initiates screen capture request to browser, then adds the stream to the conference getLocalScreen() - returns the local screen stream stopScreenShare() - stops the screen share stream and removes it from the room stopLocalVideo() - stops all local media streams setVolumeForAll(volume) - used to set the volume level for all peers volume- the volume level, between 0 and 1 leaveRoom() - leaves the currently joined room and stops local screen share disconnect() - calls disconnect on the signaling connection and deletes it handlePeerStreamAdded(peer) - used internally to attach media stream to the DOM and perform other setup handlePeerStreamRemoved(peer) - used internally to remove the video container from the DOM and emit videoRemoved getDomId(peer) - used internally to get the DOM id associated with a peer getEl(idOrEl) - helper used internally to get an element where idOrEl is either an element, or an id of an element getLocalVideoContainer() - used internally to get the container that will hold the local video element getRemoteVideoContainer() - used internally to get the container that holds the remote video elements By default, SimpleWebRTC uses a socket.io connection to communicate with the signaling server. However, you can provide an alternate connection object to use. All that your alternate connection need provide are four methods: on(ev, fn)- A method to invoke fnwhen event evis triggered emit()- A method to send/emit arbitrary arguments on the connection getSessionId()- A method to get a unique session Id for the connection disconnect()- A method to disconnect the connection
https://developer.aliyun.com/mirror/npm/package/simplewebrtc
CC-MAIN-2020-50
en
refinedweb
An API for PPMP, the Production Performance Management Protocol Project description This Python package is part of the Eclipse Unide Project and provides an API for generating, parsing and validating PPMP payloads. PPMP, the “Production Performance Management Protocol” is a simple, JSON-based protocol for message payloads in (Industrial) Internet of Things applications defined by the Eclipse IoT Working Group. Implementations for other programming languages are available from the Unide web site. The focus of the Python implementation is ease of use for backend implementations, tools and for prototyping PPMP applications. Generating a simple payload and sending it over MQTT using Eclipse Paho is a matter of just a few lines: import unide import paho.mqtt.client as mqtt client = mqtt.Client() client.connect("localhost", 1883, 60) device = unide.Device("Device-001") measurement = device.measurement(temperature=36.7) client.publish(topic="sample", measurement) Installation The latest version is available in the Python Package Index (PyPI) and can be installed using: pip install unide-python unide-python can be used with Python 2.7, 3.4, 3.5 and 3.6. Source code, including examples and tests, is available on GitHub: To install the package from source: git clone git@github.com:eclipse/unide.python.git cd unide.python python setup.py install Contributing This is a straightforward Python project, using setuptools and the standard setup.py mechanism. You can run the test suite using setup.py: python setup.py test There also is a top-level Makefile that builds a development environment and can run a couple of developer tasks. We aim for 100% test coverage and use tox to test against all supported Python releases. To run all tests against all supported Python versions, build the documentation locally and an installable wheel, you’ll require pyenv and a decent implementation of make. make all will create a virtualenv env in the project directory and install the necessary tools (see tools.txt). For bug reports, suggestions and questions, simply open an issue in the Github issue tracker. We welcome pull requests. Documentation Detailed documentation is available on Read the Docs:. Project details Release history Release notifications | RSS feed Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/unide-python/
CC-MAIN-2020-50
en
refinedweb
Namespace support Registered by Gustavo Narea on 2009-05-25 Add support for namespaces to avoid name collisions and allow for a better organization of the operands. Each operand will have one namespace, where the default is the global namespace. A namespace can be also an operand (e.g., "today == '2009-05-25'", "today:week_day == 'Monday'"). In the generic parser, the parts of a namespace are separated by colons (e.g., "namespace1: Blueprint information - Status: - Complete - Approver: - Gustavo Narea - Priority: - Essential - Drafter: - Gustavo Narea - Direction: - Approved - Assignee: - Gustavo Narea - Definition: - Approved - Implementation: Implemented - Started by - Gustavo Narea on 2009-05-31 - Completed by - Gustavo Narea on 2009-07-03 Related branches Related bugs Sprints Whiteboard Finished in r118. Dependency tree * Blueprints in grey have been implemented.
https://blueprints.launchpad.net/booleano/+spec/namespaces
CC-MAIN-2020-50
en
refinedweb
In machine learning (ML), if the situation when the model does not generalize well from the training data to unseen data is called overfitting. As you might know, it is one of the trickiest obstacles in applied machine learning. The first step in tackling this problem is to actually know that your model is overfitting. That is where proper cross-validation comes in. After identifying the problem you can prevent it from happening by applying regularization or training with more data. Still, sometimes you might not have additional data to add to your initial dataset. Acquiring and labeling additional data points may also be the wrong path. Of course, in many cases, it will deliver better results, but in terms of work, it is time-consuming and expensive a lot of the time. That is where Data Augmentation (DA) comes in. In this article we will cover: - What is Data Augmentation – definition, the purpose of use, and techniques - Built-in augmentation methods in DL frameworks – TensorFlow, Keras, PyTorch, MxNet - Image DA libraries – Augmentor, Albumentations, ImgAug, AutoAugment, Transforms - Speed comparison of these libraries - Best practices, tips, and tricks What is Data Augmentation Data Augmentation is a technique that can be used to artificially expand the size of a training set by creating modified data from the existing one. It is a good practice to use DA if you want to prevent overfitting, or the initial dataset is too small to train on, or even if you want to squeeze better performance from your model. Let’s make this clear, Data Augmentation is not only used to prevent overfitting. In general, having a large dataset is crucial for the performance of both ML and Deep Learning (DL) models. However, we can improve the performance of the model by augmenting the data we already have. It means that Data Augmentation is also good for enhancing the model’s performance. In general, DA is frequently used when building a DL model. That is why throughout this article we will mostly talk about performing Data Augmentation with various DL frameworks. Still, you should keep in mind that you can augment the data for the ML problems as well. You can augment: - Audio - Text - Images - Any other types of data We will focus on image augmentations as those are the most popular ones. Nevertheless, augmenting other types of data is as efficient and easy. That is why it’s good to remember some common techniques which can be performed to augment the data. Data Augmentation techniques We can apply various changes to the initial data. For example, for images we can use: - Geometric transformations – you can randomly flip, crop, rotate or translate images, and that is just the tip of the iceberg - Color space transformations – change RGB color channels, intensify any color - Kernel filters – sharpen or blur an image - Random Erasing – delete a part of the initial image - Mixing images – basically, mix images with one another. Might be counterintuitive but it works For text there are: - Word/sentence shuffling - Word replacement – replace words with synonyms - Syntax-tree manipulation – paraphrase the sentence to be grammatically correct using the same words - Other described in the article about Data Augmentation in NLP For audio augmentation you can use: - Noise injection - Shifting - Changing the speed of the tape - And many more Moreover, the greatest advantage of the augmentation techniques is that you may use all of them at once. Thus, you may get plenty of unique samples of data from the initial one. Data Augmentation in Deep Learning As mentioned above in Deep Learning, Data Augmentation is a common practice. Therefore, every DL framework has its own augmentation methods or even a whole library. For example, let’s see how to apply image augmentations using built-in methods in TensorFlow (TF) and Keras, PyTorch, and MxNet. Data Augmentation in TensorFlow and Keras To augment images when using TensorFlow or Keras as our DL framework we can: - Write our own augmentation pipelines or layers using tf.image. - Use Keras preprocessing layers - Use ImageDataGenerator Tf.image Let’s take a closer look on the first technique and define a function that will visualize an image and then apply the flip to that image using tf.image. You may see the code and the result below. def visualize(original, augmented): fig = plt.figure() plt.subplot(1,2,1) plt.title('Original image') plt.imshow(original) plt.subplot(1,2,2) plt.title('Augmented image') plt.imshow(augmented) flipped = tf.image.flip_left_right(image) visualize(image, flipped) For finer control you can write your own augmentation pipeline. In most cases it is useful to apply augmentations on a whole dataset, not a single image. You can implement it as follows. import tensorflow_datasets as tfds def augment(image, label): image = tf.cast(image, tf.float32) image = tf.image.resize(image, [IMG_SIZE, IMG_SIZE]) image = (image / 255.0) image = tf.image.random_crop(image, size=[IMG_SIZE, IMG_SIZE, 3]) image = tf.image.random_brightness(image, max_delta=0.5) return image, label (train_ds, val_ds, test_ds), metadata = tfds.load( 'tf_flowers', split=['train[:80%]', 'train[80%:90%]', 'train[90%:]'], with_info=True, as_supervised=True,) train_ds = train_ds .shuffle(1000) .map(augment, num_parallel_calls=tf.data.experimental.AUTOTUNE) .batch(batch_size) .prefetch(AUTOTUNE) Of course, that is just the tip of the iceberg. TensorFlow API has plenty of augmentation techniques. If you want to read more on the topic please check the official documentation or other articles. Keras preprocessing As mentioned above, Keras has a variety of preprocessing layers that may be used for Data Augmentation. You can apply them as follows. data_augmentation = tf.keras.Sequential([ layers.experimental.preprocessing.RandomFlip("horizontal_and_vertical"), layers.experimental.preprocessing.RandomRotation(0.2)]) image = tf.expand_dims(image, 0) plt.figure(figsize=(10, 10)) for i in range(9): augmented_image = data_augmentation(image) ax = plt.subplot(3, 3, i + 1) plt.imshow(augmented_image[0]) plt.axis("off") Keras ImageDataGenerator Also, you may use ImageDataGenerator (tf.keras.preprocessing.image.ImageDataGenerator) that generates batches of tensor images with real-time DA. datagen = ImageDataGenerator(rotation_range=90) datagen.fit(x_train) for X_batch, y_batch in datagen.flow(x_train, y_train, batch_size=9): for i in range(0, 9): pyplot.subplot(330 + 1 + i) pyplot.imshow(X_batch[i].reshape(img_rows, img_cols, 3)) pyplot.show() break See related articles: - Keras Loss Functions: Everything You Need To Know - Keras Metrics: Everything You Need To Know - Neptune-Keras Integration Data Augmentation in PyTorch and MxNet Transforms in Pytorch Transforms library is the augmentation part of the torchvision package that consists of popular datasets, model architectures, and common image transformations for Computer Vision tasks. To install Transforms you simply need to install torchvision: pip3 install torch torchvision Transforms library contains different image transformations that can be chained together using the Compose method. Functionally, Transforms has a variety of augmentation techniques implemented. You can combine them by using Compose method. Just check the official documentation and you will certainly find the augmentation for your task. Additionally, there is the torchvision.transforms.functional module. It has various functional transforms that give fine-grained control over the transformations. It might be really useful if you are building a more complex augmentation pipeline, for example, in the case of segmentation tasks. Besides that, Transforms doesn’t have a unique feature. It’s used mostly with PyTorch as it’s considered a built-in augmentation library. See related articles: Sample usage of PyTorch Transforms Let’s see how to apply augmentations using Transforms. You should keep in mind that Transforms works only with PIL images. That is why you should either read an image in PIL format or add the necessary transformation to your augmentation pipeline. from torchvision import transforms as tr from torchvision.transfroms import Compose pipeline = Compose( [tr.RandomRotation(degrees = 90), tr.RandomRotation(degrees = 270)]) augmented_image = pipeline(img = img) Sometimes you might want to write a custom Dataloader for the training. Let’s see how to apply augmentations via Transforms if you are doing so. from torchvision import transforms from torchvision.transforms import Compose as C def aug(p=0.5): return C([transforms.RandomHorizontalFlip()], p=p) class Dataloader(object): def __init__(self, train, csv, transform=None): ... def __getitem__(self, index): ... img = aug()(**{'image': img})['image'] return img, target def __len__(self): return len(self.image_list) trainset = Dataloader(train=True, csv='/path/to/file/', transform=aug) Transforms in MxNet Mxnet also has a built-in augmentation library called Transforms (mxnet.gluon.data.vision.transforms). It is pretty similar to PyTorch Transforms library. There is pretty much nothing to add. Check the Transforms section above if you want to find more on this topic. General usage is as follows. Sample usage of MxNet Transforms color_aug = transforms.RandomColorJitter( brightness=0.5, contrast=0.5, saturation=0.5, hue=0.5) apply(example_image, color_aug) Those are nice examples, but from my experience, the real power of Data Augmentation comes out when you are using custom libraries: - They have a wider set of transformation methods - They allow you to create custom augmentation - You can stack one transformation with another. That is why using custom DA libraries might be more effective than using built-in ones. Data Augmentation Libraries In this section, we will talk about the following libraries : - Augmentor - Albumentations - Imgaug - AutoAugment (DeepAugment) We will look at the installation, augmentation functions, augmenting process parallelization, custom augmentations, and provide a simple example. Remember that we will focus on image augmentation as it is most commonly used. Before we start I have a few general notes, about using custom augmentation libraries with different DL frameworks. In general, all libraries can be used with all frameworks if you perform augmentation before training the model. The point is that some libraries have pre-existing synergy with the specific framework, for example, Albumentations and Pytorch. It’s more convenient to use such pairs. Still, if you need specific functional or you like one library more than another you should either perform DA before starting to train a model or write a custom Dataloader and training process instead. The second major topic is using custom augmentations with different augmentation libraries. For example, you want to use your own CV2 image transformation with a specific augmentation from Albumentations library. Let’s make this clear, you can do that with any library, but it might be more complicated than you think. Some libraries have a guide in their official documentation of how to do it, but others do not. If there is no guide, you basically have two ways: - Apply augmentations separately, for example, use your transformation operation and then the pipeline. - Check Github repositories in case someone has already figured out how to integrate a custom augmentation to the pipeline correctly. Ok, with that out of the way, let’s dive in. Augmentor Moving on to the libraries, Augmentor is a Python package that aims to be both a data augmentation tool and a library of basic image pre-processing functions. It is pretty easy to install Augmentor via pip: pip install Augmentor If you want to build the package from the source, please, check the official documentation. In general, Augmentor consists of a number of classes for standard image transformation functions, such as Crop, Rotate, Flip, and many more. Augmentor allows the user to pick a probability parameter for every transformation operation. This parameter controls how often the operation is applied. Thus, Augmentor allows forming an augmenting pipeline that chains together a number of operations that are applied stochastically. This means that each time an image is passed through the pipeline, a completely different image is returned. Depending on the number of operations in the pipeline and the probability parameter, a very large amount of new image data can be created. Basically, that is data augmentation at its best. What can we do with images using Augmentor? Augmentor is more focused on geometric transformation though it has other augmentations too. The main features of Augmentor package are: - Perspective skewing – look at an image from a different angle - Elastic distortions – add distortions to an image - Rotating – simply, rotate an image - Shearing – tilt an image along with one of its sides - Cropping – crop an image - Mirroring – apply different types of flips Augmentor is a well-knit library. You can use it with various DL frameworks (TF, Keras, PyTorch, MxNet) because augmentations may be applied even before you set up a model. Moreover, Augmentor allows you to add custom augmentations. It might be a little tricky as it requires writing a new operation class, but you can do that. Unfortunately, Augmentor is neither extremely fast nor flexible functional wise. There are libraries that have more transformation functions available and can perform DA way faster and more effectively. That is why Augmentor is probably the least popular DA library. Sample usage of Augmentor Let’s check the simple usage of Augmentor: - We need to import it. - We create an empty augmenting pipeline. - Add some operations in there - Use sample method to get the augmented images. Please pay attention when using sample you need to specify the number of augmented images you want to get. import Augmentor p = Augmentor.Pipeline("/path/to/images") p.rotate(probability=0.7, max_left_rotation=10, max_right_rotation=10) p.zoom(probability=0.3, min_factor=1.1, max_factor=1.6) p.sample(10000) Albumentations Albumentations is a computer vision tool designed to perform fast and flexible image augmentations. It appears to have the largest set of transformation functions of all image augmentation libraries. Let’s install Albumentations via pip. If you want to do it somehow else, check the official documentation. pip install albumentations Albumentations provides a single and simple interface to work with different computer vision tasks such as classification, segmentation, object detection, pose estimation, and many more. The library is optimized for maximum speed and performance and has plenty of different image transformation operations. If we are talking about data augmentations, there is nothing Albumentations can not do. To tell the truth, Albumentations is the most stacked library as it does not focus on one specific area of image transformations. You can simply check the official documentation and you will find an operation that you need. Moreover, Albumentations has seamless integration with deep learning frameworks such as PyTorch and Keras. The library is a part of the PyTorch ecosystem but you can use it with TensorFlow as well. Thus, Albumentations is the most commonly used image augmentation library. On the other hand, Albumentations is not integrated with MxNet, which means if you are using MxNet as a DL framework you should write a custom Dataloader or use another augmentation library. It’s worth mentioning that Albumentations is an open-source library. You can easily check the original code if you want to. Sample usage of Albumentations Let’s see how to augment an image using Albumentations. You need to define the pipeline using the Compose method (or you can use a single augmentation), pass an image to it, and get the augmented one. import albumentations as A import cv2 def visualize(image): plt.figure(figsize=(10, 10)) plt.axis('off') plt.imshow(image) image = cv2.imread('/path/to/image') image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) transform = A.Compose( [A.CLAHE(), A.RandomRotate90(), A.Transpose(), A.ShiftScaleRotate(shift_limit=0.0625, scale_limit=0.50, rotate_limit=45, p=.75), A.Blur(blur_limit=3), A.OpticalDistortion(), A.GridDistortion(), A.HueSaturationValue()]) augmented_image = transform(image=image)['image'] visualize(augmented_image) ImgAug Now, after reading about Augmentor and Albumentations you might think all image augmentation libraries are pretty similar to one another. That is right. In many cases, the functionality of each library is interchangeable. Nevertheless, each one has its own key features. ImgAug is also a library for image augmentations. It is pretty similar to Augmentor and Albumentations functional wise, but the main feature stated in the official ImgAug documentation is the ability to execute augmentations on multiple CPU cores. If you want to do that you might want to check the following guide. As you may see, this’s pretty different from the Augmentors focus on geometric transformations or Albumentations attempting to cover all augmentations possible. Nevertheless, ImgAug’s key feature seems a bit weird as both Augmentor and Albumentations can be executed on multiple CPU cores as well. Anyway ImgAug supports a wide range of augmentation techniques just like Albumentations and implements sophisticated augmentation with fine-grained control. ImgAug can be easily installed via pip or conda. pip install imgaug Sample usage of ImgAug Like other image augmentation libraries, ImgAug is easy to use. To define an augmenting pipeline use the Sequential method and then simply stack different transformation operations like in other libraries. from imgaug import augmenters as iaa seq = iaa.Sequential([ iaa.Crop(px=(0, 16)), iaa.Fliplr(0.5), iaa.GaussianBlur(sigma=(0, 3.0))]) for batch_idx in range(1000): images = load_batch(batch_idx) images_aug = seq(images=images) Autoaugment On the other hand, Autoaugment is something more interesting. As you might know, using Machine Learning (ML) to improve ML design choices has already reached the space of DA. In 2018 Google has presented Autoaugment algorithm which is designed to search for the best augmentation policies. Autoaugment helped to improve state-of-the-art model performance on such datasets as CIFAR-10, CIFAR-100, ImageNet, and others. Still, AutoAugment is tricky to use, as it does not provide the controller module, which prevents users from running it for their own datasets. That is why using AutoAugment might be relevant only if it already has the augmentation strategies for the dataset we plan to train on and the task we are up to. Thereby let us take a closer look at DeepAugment that is a bit faster and more flexible alternative to AutoAugment. DeepAugment has no strong connection to AutoAugment besides the general idea and was developed by a group of enthusiasts. You can install it via pip: pip install deepaugment It’s important for us to know how to use DeepAugment to get the best augmentation strategies for our images. You may do it as follows or check out the official Github repository. Please, keep in mind that when you use optimize method you should specify the number of samples that will be used to find the best augmentation strategies. from deepaugment.deepaugment import DeepAugment deepaug = DeepAugment(my_images, my_labels) best_policies = deepaug.optimize(300) Overall, both AutoAugment and DeepAugment are not commonly used. Still, it might be quite useful to run them if you have no idea of what augmentation techniques will be the best for your data. You should only keep in mind that it will take plenty of time because multiple models will be trained. It’s worth mentioning that we have not covered all custom image augmentation libraries, but we have covered the major ones. Now you know what libraries are the most popular, what advantages and disadvantages they have, and how to use them. This knowledge will help you to find any additional information if you need so. Speed comparison As you may have already figured out, the augmentation process is a quite expensive time and computation wise. The time needed to perform DA depends on the number of data points we need to transform, on the overall augmenting pipeline difficulty, and even on the hardware that you use to augment your data. Let’s run some experiments to find out the fastest augmentation library. We will perform these experiments for Augmentor, Albumentations, ImgAug, and Transforms. We will use an image dataset from Kaggle that is made for flower recognition and contains over four thousand images. For our first experiment, we will create an augmenting pipeline that consists only of two operations. These will be Horizontal Flip with 0.4 probability and Vertical Flip with 0.8 probability. Let’s apply the pipeline to every image in the dataset and measure the time. As we have anticipated, Augmentor performs way slower than other libraries. Still, both Albumentations and Transforms show a good result as they are optimized to perform fast augmentations. For our second experiment, we will create a more complex pipeline with various transformations to see if Transforms and Albumentations stay at the top. We will stack more geometric transformations as a pipeline. Thus, we will be able to use all libraries as Augmentor, for example, doesn’t have much kernel filter operations. You may find the full pipeline in the notebook that I’ve prepared for you. Please, feel free to experiment and play with it. Once more Transforms and Albumentations are at the top. Moreover, if we check the CPU-usage graph that we got via Neptune we will find out that both Albumentations and Transforms use less than 60% of CPU resources. On the other hand, Augmentor and ImgAug use more than 80%. As you may have noticed, both Albumentations and Transforms are really fast. That is why they are commonly used in real life. Best practices, tips, and tricks It’s worth mentioning that despite DA being a powerful tool you should use it carefully. There are some general rules that you might want to follow when applying augmentations: - Choose proper augmentations for your task. Let’s imagine that you are trying to detect a face on an image. You choose Random Erasing as an augmentation technique and suddenly your model does not perform well even on training. That is because there is no face on an image as it was randomly erased by the augmentation technique. The same thing is with voice detection and applying noise injection to the tape as an augmentation. Keep these cases in mind and be logical when choosing DA techniques. - Do not use too many augmentations in one sequence. You may simply create a totally new observation that has nothing in common with your original training (or testing data) - Display augmented data (images and text) in the notebook and listen to the converted audio sample before starting training on them. It’s quite easy to make a mistake when forming an augmenting pipeline. That is why it’s always better to double-check the result. - Time the augmenting process and check the number of computational resources involved. As you may have seen above, Neptune can help you do that. Do not forget about the time library either. Also, it’s a great practice to check Kaggle notebooks before creating your own augmenting pipeline. There are plenty of ideas you may find there. Try to find a notebook for a similar task and check if the author applied the same augmentations as you’ve planned. Final thoughts In this article, we have figured out what data augmentation is, what DA techniques are there, and what libraries you can use to apply them. To my knowledge, the best publically available library is Albumentations. That is why if you are working with images and do not use MxNet or TensorFlow as your DL framework, you should probably use Albumentations for DA. Hopefully, with this information, you will have no problems setting up the DA for your next machine learning project. Resources - - - - - - - -
https://neptune.ai/blog/data-augmentation-in-python
CC-MAIN-2020-50
en
refinedweb
Spring Boot Actuator is a sub-project of Spring Boot. It adds several production grade services to your application with little effort on your part. In this guide, you will build an application and then see how to add these services. What You Will build This guide takes you through creating a “Hello, world” RESTful web service with Spring Boot Actuator. You will build a service that accepts the following HTTP GET request: $ curl It responds with the following JSON: {"id":1,"content":"Hello, World!"} There are also many features added to your application for managing the service in a production (or other) environment. The business functionality of the service you build is the same as in Building a RESTful Web Service. You need need not use that guide to take advantage of this one, although it might be interesting to compare the results.-actuator-service/initial Jump ahead to Create a Representation Class. When you finish, you can check your results against the code in gs-actuator-service Spring Boot Actuator>actuator-service</artifactId> <version>0.0.1-SNAPSHOT</version> <name>actuator-service</name> <description>Demo project for Spring Boot</description> <properties> <java.version>1.8</java.version> </properties> -actuator' implementation 'org.springframework.boot:spring-boot-starter-web' testImplementation('org.springframework.boot:spring-boot-starter-test') { exclude group: 'org.junit.vintage', module: 'junit-vintage-engine' } } test { useJUnitPlatform() } Run the Empty Service The Spring Initializr creates an empty application that you can use to get started. The following example (from src/main/java/com/example/actuatorservice/ActuatorServiceApplication in the initial directory) shows the class created by the Spring Initializr: package com.example.actuatorservice; import org.springframework.boot.SpringApplication; import org.springframework.boot.autoconfigure.SpringBootApplication; @SpringBootApplication public class ActuatorServiceApplication { public static void main(String[] args) { SpringApplication.run(ActuatorServiceApplication.class, args); } } The @SpringBootApplication annotation provides a load of defaults (like the embedded servlet container), depending on the contents of your classpath and other things. It also turns on Spring MVC’s @EnableWebMvc annotation, which activates web endpoints. There are no endpoints defined in this application, but there is enough to launch things and see some of Actuator’s features. The SpringApplication.run() command knows how to launch the web application. All you need to do is run the following command: $ ./gradlew clean build && java -jar build/libs/gs-actuator-service-0.1.0.jar You have yet to write any code, so what is happening? To see the answer, wait for the server to start, open another terminal, and try the following command (shown with its output): $ curl localhost:8080 {"timestamp":1384788106983,"error":"Not Found","status":404,"message":""} The output of the preceding command indicates that the server is running but that you have not defined any business endpoints yet. Instead of a default container-generated HTML error response, you see a generic JSON response from the Actuator /error endpoint. You can see in the console logs from the server startup which endpoints are provided out of the box. You can try a few of those endpoints, including the /health endpoint. The following example shows how to do so: $ curl localhost:8080/actuator/health {"status":"UP"} The status is UP, so the actuator service is running. See Spring Boot’s Actuator Project for more details. Create a Representation Class First, you need to give some thought to what your API will look like. You want to handle GET requests for /hello-world, optionally with a name query parameter. In response to such a request, you want to send back JSON, representing a greeting, that looks something like the following: { "id": 1, "content": "Hello, World!" } The id field is a unique identifier for the greeting, and content contains the textual representation of the greeting. To model the greeting representation, create a representation class. The following listing (from src/main/java/com/example/actuatorservice/Greeting.java) shows the Greeting class: package com.example.actuatorservice; public class Greeting { private final long id; private final String content; public Greeting(long id, String content) { this.id = id; this.content = content; } public long getId() { return id; } public String getContent() { return content; } } Now that you need to create the endpoint controller that will serve the representation class. Create a Resource Controller In Spring, REST endpoints are Spring MVC controllers. The following Spring MVC controller (from src/main/java/com/example/actuatorservice/HelloWorldController.java) handles a GET request for the /hello-world endpoint and returns the Greeting resource: package com.example.actuatorservice; returns the data to be written directly to the body of the response. The @ResponseBody annotation tells Spring MVC not to render a model into a view but, rather, to write the returned object into the response body. It does so by using one of Spring’s message converters. Because Jackson 2 is in the classpath, MappingJackson2HttpMessageConverter will handle the conversion of a Greeting object to JSON if the request’s Accept header specifies that JSON should be returned. Run the Application You can run the application from a custom main class or directly from one of the configuration classes. For this simple example, you can use the SpringApplication helper class. Note that this is the application class that the Spring Initializr created for you, and you need not even modify it for it to work for this simple application. The following listing (from src/main/java/com/example/actuatorservice/HelloWorldApplication.java) shows the application class: package com.example.actuatorservice; import org.springframework.boot.SpringApplication; import org.springframework.boot.autoconfigure.SpringBootApplication; @SpringBootApplication public class HelloWorldApplication { public static void main(String[] args) { SpringApplication.run(HelloWorldApplication.class, args); } } annotation also brings in a @ComponentScan annotation, which tells Spring to scan the com.example.actuatorservice package for those controllers (along with any other annotated component classes).: Once the service is running (because you ran spring-boot:run in a terminal), you can test it by running the following command in a separate terminal: $ curl localhost:8080/hello-world {"id":1,"content":"Hello, Stranger!"} Switch to a Different Server Port Spring Boot Actuator defaults to running on port 8080. By adding an application.properties file, you can override that setting. The following listing (from src/main/resources/application.properties)shows that file with the necessary changes: server.port: 9000 management.server.port: 9001 management.server.address: 127.0.0.1 Run the server again by running the following command in a terminal: $ ./gradlew clean build && java -jar build/libs/gs-actuator-service-0.1.0.jar The service now starts on port 9000. You can test that it is working on port 9000 by running the following commands in a terminal: $ curl localhost:8080/hello-world curl: (52) Empty reply from server $ curl localhost:9000/hello-world {"id":1,"content":"Hello, Stranger!"} $ curl localhost:9001/actuator/health {"status":"UP"} Test Your Application To check whether your application works, you should write unit and integration tests for your application. The test class in src/test/java/com/example/actuatorservice/HelloWorldApplicationTests.java ensures that Your controller is responsive. Your management endpoint is responsive. Note that the tests start the application on a random port. The following listing shows the test class: /* * Copyright 2012-2014.actuatorservice; import java.util.Map; import org.junit.jupiter.api.Test; static org.assertj.core.api.BDDAssertions.then; /** * Basic integration tests for service demo application. * * @author Dave Syer */ by using Spring, and you added some useful built-in services with Spring Boot Actuator. See Also The following guides may also be helpful: Want to write a new guide or contribute to an existing one? Check out our contribution guidelines.
https://spring.io/guides/gs/actuator-service/
CC-MAIN-2020-50
en
refinedweb
Created on 2013-04-22 16:41 by Nils.Bruin, last changed 2020-11-17 15:13 by iritkatriel. This issue is now closed. The following program is a little dependent on memory layout but will usually generate lots of Exception KeyError: (A(9996),) in <function remove at 0xa47050> ignored messages in Python 2.7. import weakref class A(object): def __init__(self,n): self.n=n def __repr__(self): return "A(%d)"%self.n def mess(n): D=weakref.WeakValueDictionary() L=[A(i) for i in range(n)] for i in range(n-1): j=(i+10)%n D[L[i]]=L[j] return D D=mess(10000) D.clear() The reason is that on D.clear() all entries are removed from D before actually deleting those entries. Once the entries are deleted one-by-one, sometimes the removal of a key will result in deallocation of that key, which may be a not-yet-deleted ex-value of the dictionary as well. The callback triggers on the weakref, but the dict itself was already emptied, so nothing is found. I've checked and on Python 3.2.3 this problem does not seem to occur. I haven't checked the Python source to see how Python 3 behaves differently and whether that behaviour would be easy to backport to fix this bug in 2.7. This is. The patch from there could easily be backported, I think. Have you tried if the fix at issue7105 solves the problem? I don't see the patch there introduce a `clear` method override for WeakValueDictionary or WeakKeyDictionary. The one for WeakSet still calls self.data.clear(), which for dictionaries would still result in the problem in this ticket (but not for WeakSet, because clearing a WeakSet shouldn't decref anything other than the weak references stored in the underlying set). I think the difference in behaviour between Py3 and Py2 is coming from: which first clears all values before removing any keys. For a WeakValueDictionary that means all the weakrefs are neutralized before the can be activated. I don't quite understand how Py3 manages to avoid problems for a WeakKeyDictionary, but apparently it does. One solution is to patch both WeakValueDictionary and WeakKeyDictionary with their own clear methods where we first store the strong links (to keys, resp. values) in a list, then clear the underlying dictionaries (this will now trigger the deletion of the weakrefs, so all callbacks are neutralized), and then delete the list. It does use more storage that way, but it gets rid of the ignored key errors. This is a different problem from issue7105, which deals with the (much more complicated) scenario of avoiding dictionary reshaping due to GC when iterators are still (potentially) active. This was fixed in Python 3 and Python 2 is past its EOL.
https://bugs.python.org/issue17816
CC-MAIN-2020-50
en
refinedweb
: def data = [ new Expando(id: 1, user: 'mrhaki', country: 'The Netherlands'), new Expando(id: 2, user: 'hubert', country: 'The Netherlands'), ] data.each { userData -> new File("${userData.id}.txt").withWriter('UTF-8') { fileWriter -> // Use writeTo method on GString to save // result in a file. "User $userData.user lives in $userData.country".writeTo(fileWriter) } } assert new File('1.txt').text == 'User mrhaki lives in The Netherlands' assert new File('2.txt').text == 'User hubert lives in The Netherlands' Code written with Groovy 2.2.2
https://blog.mrhaki.com/2014/04/groovy-goodness-gstring-as-writable.html
CC-MAIN-2020-50
en
refinedweb
This Python example shows you how to: Metrics are data about the performance of your systems. You can enable detailed monitoring of some resources, such as your Amazon CloudWatch instances, or your own application metrics. In this example, Python code is used to get and send CloudWatch metrics data. The code uses the AWS SDK for Python to get metrics from CloudWatch using these methods of the CloudWatch client class: For more information about CloudWatch metrics, see Using Amazon CloudWatch Metrics in the Amazon CloudWatch User Guide. All the example code for the Amazon Web Services (AWS) SDK for Python is available here on GitHub. To set up and run this example, you must first configure your AWS credentials, as described in Quickstart. List the metric alarm events uploaded to CloudWatch Logs. The example below shows how to: For more information about paginators see, Paginators import']) Publish metric data points to Amazon CloudWatch. Amazon CloudWatch associates the data points with the specified metric. If the specified metric does not exist, Amazon CloudWatch creates the metric. When Amazon CloudWatch creates a metric, it can take up to fifteen minutes for the metric to appear in calls to ListMetrics. The example below shows how to: import boto3 # Create CloudWatch client cloudwatch = boto3.client('cloudwatch') # Put custom metrics cloudwatch.put_metric_data( MetricData=[ { 'MetricName': 'PAGES_VISITED', 'Dimensions': [ { 'Name': 'UNIQUE_PAGES', 'Value': 'URLS' }, ], 'Unit': 'None', 'Value': 1.0 }, ], Namespace='SITE/TRAFFIC' )
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/cw-example-metrics.html
CC-MAIN-2020-50
en
refinedweb
You've been invited into the Kudos (beta program) private group. Chat with others in the program, or give feedback to Atlassian.View group Join the community to find out what other Atlassian users are discussing, debating and creating. I've created a Script Post-Function [ScriptRunner] Function that automatically creates Sub-tasks when a New issue is created in the project. Now, I want to apply a condition that will only create those Sub-tasks when a specific custom field is set to a specific value, i.e., Project Type (Custom Field) == Project Initiative (dropdown menu option). Any help would be greatly appreciated! Condition: Which is where I'm wanting to put only Project Type (Custom Field) = "Project Initiative". Target Issue Type: Sub-task Subtask Summary: PM Activities - Define/Design Code: issue.setDescription("For Project Managers to track and report their time against this Initiative. Please make sure to update the 'Original Estimate' field under the 'Time Tracking' tab to acurately estimate the time you'll spend supporting this Initiative.") MutableIssue myIssue = issue Project project = myIssue.getProjectObject() def theComponent = ComponentAccessor.getProjectComponentManager().findByComponentName(project.getId(),'PMO - PMs') if (theComponent == null) { theComponent = ComponentAccessor.getProjectComponentManager().create("PMO - PMs","","",0,project.getId()) } myIssue.setComponent([theComponent]) // set Original Estimate to 10d issue.setEstimate((Long) 288000).
https://community.atlassian.com/t5/Jira-Core-questions/Only-Create-a-Sub-task-for-a-Specific-Custom-Field-Value/qaq-p/1336169
CC-MAIN-2020-50
en
refinedweb
#include <iostream> #include <string> #include <algorithm> #include <vector> #include <list> #include <stdio.h> #include <stdint.h> using namespace std; int main() { list<string> rucksack; string input; cout << "which items would you like to put in your rucksack"; getline(cin, input); vector<string> keywords{"bow", "sword", "scales", "cloak"}; for(const auto& keyword : keywords) { auto pos = input.find(keyword); cout << .. Category : cl am having some issues with the LLDB debugger. If I create an array of std::vector with a literal (e.g. vector<int> a[3];), I can freely view it in the debugger variables. However, if I specify the length with a variable (e.g. int n = 3; vector<int> a[n];), the debugger displays "parent failed to evaluate: variable .. I have a C++ project described by a CMakeLists.txt file from which CMake successfully generates a MAKE file if I call CMake from the terminal. (This is on Ubuntu.) The project has dependencies on Boost and Eigen which are both installed on my system. I can see the Boost includes in /usr/include/boost, Boost binaries in .. I have created a new C++11 project on Clion and ran it locally which gave me no errors at all. But, when I ran it on an external server like this: g++ -std=c++11 -DNDEBUG -Wall *.cpp I got few error (which I was able to correct them later). My question is how can I prevent .. I am trying to include boost library to clion using the following CMakeLists.txt cmake_minimum_required(VERSION 3.16) project(Prototype3Relational) set(CMAKE_CXX_STANDARD 20) set(Boost_INCLUDE_DIR "F:Essentialboost_1_74_0") find_package(Boost) include_directories(${Boost_INCLUDE_DIR}) add_executable(Prototype3Relational main.cpp atomic_logic.h logic_engine.h F.h) but it seems to take forever to index. Most of which are irrelevant like the docs,examples, a few HTML files here and there and a lot . It .. I am trying to use windows boost 1.74.0 library in clion and all i get is errors. Maybe i am missing something in my CMakeLists.txt which looks as follows. cmake_minimum_required(VERSION 3.16) project(Prototype3Relational) set(CMAKE_CXX_STANDARD 20) set(Boost_INCLUDE_DIR "F:Essentialboost_1_74_0") find_package(Boost COMPONENTS lambda REQUIRED) include_directories(${Boost_INCLUDE_DIR}) add_executable(Prototype3Relational main.cpp atomic_logic.h logic_engine.h F.h) Every time i reload cmakeproject i get an error .. .. So, I have no code, just empty files and a CMake, but I keep getting that Linker Error. Can someone please explain in a lot of detail what my problem is? Some info I have is that I am supposed to be using Visual Studio 2015 as my compiler and stuff, which I think I .. please tell me what the problem may be, I wrote a program in C++ under Linux, but there was a problem with the execl () function, it gives the following error "No such file or directory", please tell me what I’m doing wrong? The code of the main program(the main file) where excel is called(the .. Recent Comments
https://windowsquestions.com/category/clion/
CC-MAIN-2020-50
en
refinedweb
3509/how-do-i-copy-a-file-in-python How do I copy a file in Python? I couldn't find anything under os. def main(): try: do main program stuff here .... except KeyboardInterrupt: print "Shutdown requested...exiting" except Exception: traceback.print_exc(file=sys.stdout) sys.exit(0) if __name__ == "__main__": main() copy a file in python from shutil import copyfile copyfile(src,dst) where src is copy the content of file name and dst is the destination location must be writable otherwise we will get error. file handling is the import concept of python in file handling we can do several function like creating file , reading file,updating file and deleting file and copy content file. Use the shutil module. copyfile(src, dst) Copy the contents ...READ MORE readline function help to read line in ...READ MORE Hey @David! TRy something like this: campaign_data = ...READ MORE Hi @Mike. First, read both the csv ...READ MORE Hey, Web scraping is a technique to automatically ...READ MORE yes, you can use "os.rename" for that. ...READ MORE You can try the below code which ...READ MORE connect mysql database with python import MySQLdb db = ...READ MORE down voteacceptedFor windows: you could use winsound.SND_ASYNC to play them ...READ MORE len() >>> mylist=[] >>> print len(mylist) 0 READ MORE OR Already have an account? Sign in.
https://www.edureka.co/community/3509/how-do-i-copy-a-file-in-python
CC-MAIN-2020-50
en
refinedweb
Key Takeaways - Based on interaction and communication style, we can group microservices into two groups: external-facing microservices and internal microservices. - RESTful APIs are the de facto communication technology for external-facing microservices (REST’s ubiquity and rich supporting ecosystem play a vital role in its continued success). -. - gRPC is supported by many major programming languages. We will discuss sample implementations by using Ballerinalang and Golang as the programming languages. In modern microservice architecture, we can categorize microservices into two main groups based on their interaction and communication. The first group of microservices acts as external-facing microservices, which are directly exposed to consumers. They are mainly HTTP-based APIs that use conventional text-based messaging payloads (JSON, XML, etc.) that are optimized for external developers, and use Representational State Transfer (REST) as the de facto communication technology. REST’s ubiquity and rich ecosystem play a vital role in the success of these external-facing microservices. OpenAPIprovides well-defined specifications for describing, producing, consuming, and visualizing these REST APIs. API management systems work well with these APIs and provide security, rate limiting, caching, and monetizing along with business requirements. GraphQL can be an alternative for the HTTP-based REST APIs but it is out of scope for this article. The other group of microservices are internal and don’t communicate with external systems or external developers. These microservices interact with each other to complete a given set of tasks. Internal microservices use either synchronous or asynchronous communication. In many cases, we can see the use of REST APIs over HTTP as a synchronous mode but that would not be the best technology to use. In this article, we will take a closer look at how we can leverage a binary protocol such as gRPC which can be an optimized communication protocol for inter-service communication What is gRPC? gRPC is a relatively new Remote Procedure Call (RPC) API paradigm for inter-service communications. Like all other RPCs, it allows directly invoking methods on a server application on a different machine as if it were a local object. Same as other binary protocols like Thrift and Avro, gRPC uses an interface description language (IDL) to define a service contract. gRPC uses HTTP/2, the latest network transport protocol, as the default transport protocol and this makes gRPC fast and robust compared to REST over HTTP/1.1. You can define the gRPC service contract by using Protocol Buffers where each service definition specifies the number of methods with the expected input and output messages with the data structure of the parameters and return types. Using major programming language provided tools, a server-side skeleton and client-side code (stub) can be generated using the same Protocol Buffers file which defines the service contract. A Pragmatic Microservices Use Case with gRPC Figure 1: A segment of an online retail shop microservices architecture One of the main benefits of microservice architecture is to build different services by using the most appropriate programming language rather than building everything in one language. Figure 1 illustrates a segment of an online retail shop microservice architecture, where four microservices are implemented in Ballerina (referred to as Ballerina in the rest of the article) and Golang working together to provide some functionality of the retail online shop. Since gRPC is supported by many major programming languages, when we define the service contracts, implementation can be carried out with a well-suited programming language. Let’s define service contracts for each service. syntax="proto3"; package retail_shop; service OrderService { rpc UpdateOrder(Item) returns (Order); } message Item { string itemNumber = 1; int32 quantity = 2; } message Order { string itemNumber = 1; int32 totalQuantity = 2; float subTotal = 3; Listing 1: Service contract for Order microservice (order.proto) The Order microservice will get shopping items and the quantity and return the subtotal. Here I use the Ballerina gRPC tool to generate a gRPC service boilerplate code and the stub/client respectively. $ ballerina grpc --mode service --input proto/order.proto --output gen_code This generates the OrderService server boilerplate code. import ballerina/grpc; listener grpc:Listener ep = new (9090); service OrderService on ep { resource function UpdateOrder(grpc:Caller caller, Item value) { // Implementation goes here. // You should return an Order } } public type Order record {| string itemNumber = ""; int totalQuantity = 0; float subTotal = 0.0; |}; public type Item record {| string itemNumber = ""; int quantity = 0; |}; Listing 2: Code snippet of the generated boilerplate code (OrderService_sample_service.bal) gRPC service is perfectly mapped to Ballerina’s service type, gRPC rpc mapped to Ballerina’s resource function and the gRPC messages are mapped to the Ballerina record type. I have created a separate Ballerina project for the Order microservice and used the generated OrderService boilerplate code to implement a gRPC unary service. Unary Blocking OrderService is called in the Cart microservice. We can use the following Ballerina command to generate the client stub and client code. $ ballerina grpc --mode client --input proto/order.proto --output gen_code The generated client stub has both blocking and non-blocking remote methods. This sample code demonstrates how the gRPC unary service interacts with the gRPC blocking client. public remote function UpdateOrder(Item req, grpc:Headers? headers = ()) returns ([Order, grpc:Headers]|grpc:Error) { var payload = check self.grpcClient->blockingExecute("retail_shop.OrderService/UpdateOrder", req, headers); grpc:Headers resHeaders = new; anydata result = (); [result, resHeaders] = payload; return [<Order>result, resHeaders]; } }; Listing 3: Generated remote object code snippet for blocking mode Ballerina’s remote method abstraction is a nicely fitted gRPC client stub and you can see how the UpdateOrder invocation code is very clean and neat. The Checkout microservice issues the final bill by aggregating all interim orders received from the Cart microservice. In this case, we are going to send all interim orders as stream Order messages. syntax="proto3"; package retail_shop; service CheckoutService { rpc Checkout(stream Order) returns (FinalBill) {} } message Order { string itemNumber = 1; int32 totalQuantity = 2; float subTotal = 3; } message FinalBill { float total = 1; } Listing 4: Service contract for Checkout microservice (checkout.proto) You can use the ballerina grpc command to generate boilerplate code for checkout.proto. $ ballerina grpc --mode service --input proto/checkout.proto --output gen_code gRPC Client Streaming The Cart microservices (client) streamed messages are made available as a stream object argument, which can be iterated through using a loop, processing each message sent by the client. See the following sample implementation: service CheckoutService on ep { resource function Checkout(grpc:Caller caller, stream<Order,error> clientStream) { float totalBill = 0; //Iterating through streamed messages here error? e = clientStream.forEach(function(Order order) { totalBill += order.subTotal; }); //Once the client completes stream, a grpc:EOS error is returned to indicate it if (e is grpc:EOS) { FinalBill finalBill = { total:totalBill }; //Sending the total bill to the client grpc:Error? result = caller->send(finalBill); if (result is grpc:Error) { log:printError("Error occurred when sending the Finalbill: " + result.message() + " - " + <string>result.detail()["message"]); } else { log:printInfo ("Sending Final Bill Total: " + finalBill.total.toString()); } result = caller->complete(); if (result is grpc:Error) { log:printError("Error occurred when closing the connection: " + result.message() +" - " + <string>result.detail()["message"]); } } //If the client sends an error instead it can be handled here else if (e is grpc:Error) { log:printError("An unexpected error occured: " + e.message() + " - " + <string>e.detail()["message"]); } } } Listing 5: Service code snippet of the sample implementation for Once the client stream has completed, a grpc:EOS error is returned that can be used to identify when to send the final response message (aggregated total) to the client using the caller object. The sample client code and client stub for the CheckoutService can be generated by using the following command: $ ballerina grpc --mode client --input proto/checkout.proto --output gen_code Let’s look at the Cart microservice implementation. The Cart microservice has two REST APIs—one to add items to the cart and another to do the final checkout. When adding items to the cart, it will get an interim order with a subtotal for each item by doing a gRPC call to the Order microservice and storing it in memory. Calling the Checkout microservice will send all in-memory stored interim orders to the Checkout microservice as a gRPC stream and return the total amount to pay. Ballerina uses the built-in Stream type and Client Object abstractions to implement the gRPC client streaming. See Figure 2, which illustrates how Ballerina’s client streaming works. Figure 2: Ballerina gRPC client streaming The full implementation of CheckoutService client streaming can be found in the Cart microservice checkout resource function. Finally, in the checkout process, do a gRPC call to the Stock microservice which is implemented in Golang, and update the stock by deducting sold items. grpc-gateway syntax="proto3"; package retail_shop; option go_package = "../stock;gen"; import "google/api/annotations.proto"; service StockService { rpc UpdateStock(UpdateStockRequest) returns (Stock) { option (google.api.http) = { // Route to this method from POST requests to /api/v1/stock put: "/api/v1/stock" body: "*" }; } } message UpdateStockRequest { string itemNumber = 1; int32 quantity = 2; } message Stock { string itemNumber = 1; int32 quantity = 2; Listing 6: Service contract for Stock microservice (stock.proto) In this scenario, the same UpdateStock service will be invoked by using a REST API call as an external-facing API and also invoked by using a gRPC call as an inter-service call. grpc-gateway is a plugin of protoc, which reads a gRPC service definition and generates a reverse-proxy server that translates a RESTful JSON API into gRPC. Figure 3: grpc-gateway grpc-gateway helps you to provide your APIs in both gRPC and REST style at the same time. The following command generates the Golang gRPC stubs: protoc -I/usr/local/include -I. \ -I$GOROOT/src \ -I$GOROOT/src/github.com/grpc-ecosystem/grpc-gateway/third_party/googleapis \ --go_out=plugins=grpc:. \ stock.proto The following command generates the Golang grpc-gateway code: protoc -I/usr/local/include -I. \ -I$GOROOT/src \ -I$GOROOT/src/github.com/grpc-ecosystem/grpc-gateway/third_party/googleapis \ --grpc-gateway_out=logtostderr=true:. \ stock.proto The following command generates the stock.swagger.json: protoc -I/usr/local/include -I. \ -I$GOROOT/src \ -I$GOROOT/src/github.com/grpc-ecosystem/grpc-gateway/third_party/googleapis \ -I$GOROOT/src \ --swagger_out=logtostderr=true:../stock/gen/. \ ./stock.proto Sample Run Clone the microservices-with-grpc git repo and follow the README.md instructions. Conclusion gRPC is relatively new, but its fast-growing ecosystem and community will definitely make an impact in microservice development. Since gRPC is an open standard, all mainstream programming languages support it, making it ideal to work in a polyglot microservice environment. As a general practice, we can use gRPC for all synchronous communications between internal microservices, and also we can expose it as REST-style APIs by using emerging technology like grpc-gateway. In addition to what we discussed in this article, gRPC features like Deadlines, Cancellation, Channels, and xDS support will provide immense power and flexibility to developers to build effective microservices. More Resources To read and learn more about Ballerina gRPC support, check out the following links: Golang has comprehensive gRPC support and we can extend these microservices to enhance security, robustness, and resiliency by using gRPC Interceptor, Deadlines, Cancellation, and Channels among other things. Check out the grpc-go git repo which has many working samples on these concepts. Recommended videos: - Generating Unified APIs with Protocol Buffers and gRPC - Writing REST Services for the gRPC curious - Using gRPC for Long-lived and Streaming RPCs
https://www.infoq.com/articles/microservices-grpc-ballerina-go/?topicPageSponsorship=0bdfd03e-dfe7-4dfd-aabe-7bd66eedfb31&itm_source=articles_about_microservices&itm_medium=link&itm_campaign=microservices
CC-MAIN-2020-50
en
refinedweb
Groovy Web Service Long cherished dream of mine, reverberating through the darker corners of my innermost thoughts... figuring out how to consume a web service in Groovy. "A web service? In Groovy? That must mean you use the same standard Java libraries for JAX-WS, or JAX-RPC, generate client stubs and then use them to connect to the web service, right?" Wrong. Forget stubs. Groovy provides its own library for web services. Just to simplify the life of developers, since it is incredibly lightweight and gets the job done painlessly. And there are no stubs. Everything, though slightly out of date, is described here: I tried the final example, with success. Here's my Groovy script: import groovyx.net.ws.WSClient class TryIt { groovy.swing.SwingBuilder swing = new groovy.swing.SwingBuilder() def proxy = new WSClient("", TryIt.class.classLoader) def currency = ['USD', 'EUR', 'CAD', 'GBP', 'AUD', 'SGD'] def rate = 0.0 void main() { def refresh = swing.action( name:'Refresh', closure:this.&refreshText, mnemonic:'R' ) def } } It's incredible that this is literally all the code that you need. Nothing more in any shape or form. No configuration files, no stubs, no XML, no anything else. The above is almost the same as in the original document referred to above, but slightly tweaked (e.g., the 'def' keyword had been omitted in a few places). When I call the above class from a Java class (since NetBeans IDE doesn't support the running of Groovy classes, just Groovy scripts), the following Swing form appears, created from the SwingBuilder code above: Then I enter a number (in the above case, I typed '500.00') and press Refresh. A number that doesn't make much sense to me returns, but that's how things go with web services over which you have no control. They're just black boxes, spewing something back to you upon request: To set this up in NetBeans IDE, apart from creating the Groovy class in a Groovy file and calling it from a Java class, you need to be aware of the following: - Make sure to include the groovyws JAR when you compile, otherwise compilation will fail, since you're using Groovy's WSClient class: <target name="groovyc" description="groovyc"> <taskdef name="groovyc" classpath="lib/groovy-all-1.1-rc-2-SNAPSHOT.jar" classname="org.codehaus.groovy.ant.Groovyc"/> <groovyc srcdir="${src.dir}" destdir="groovy"> <classpath path="lib/groovy-all-1.1-rc-2-SNAPSHOT.jar"/> <classpath path="lib/groovyws-all-0.1.jar"/> </groovyc> </target> I can't remember where I got that JAR from. I googled a lot and found it referred to somewhere, after groovyws-standalone.jar turned out to not include everything I needed. (But maybe I'm wrong, I'll check this.) Now that you know it is called groovyws-all-0.1.jar, you should be able to find it. (Would be cool if it were bundled with the standard Groovy distribution.) - If you're using JDK 6, you need to set the endorsed dir, in the Run panel, in the Project Properties dialog box: -Djava.endorsed.dirs=copy the value of jaxws.endorsed.dir from nbproject/private.properties - You need ant-1.7.0.jar and the Groovy 1.1 JAR (or some other version of the Groovy 1.1 JAR) in your application's Libraries node. And, I think, that's it. One thing thing I'm going to try is to get the Daily Dilbert web service to return comics via Groovy, as above, although I suspect I may end up in trouble with the images. But, before beginning that, I did a bit of tweaking, and not much later I now have the world's simplest web service client: import groovyx.net.ws.WSClient class TryIt { groovy.swing.SwingBuilder swing = new groovy.swing.SwingBuilder() def proxy = new WSClient("", TryIt.class.classLoader) void main() { def frame = swing.frame(title:'Thought for the Day') { panel { label(proxy.ForToday()) } } frame.pack() frame.show() } } The above (which is NOT a snippet, it is the entire web service client), results in this when I run it: And here's one that's a bit more interactive, similar to the first one, but this time sending snippets of Shakespeare to a web service in order to retrieve full speeches, which has been referred to several times before in this blog: import groovyx.net.ws.WSClient import java.awt.BorderLayout class TryIt { groovy.swing.SwingBuilder swing = new groovy.swing.SwingBuilder() def proxy = new WSClient("", TryIt.class.classLoader) void main() { def frame = swing.frame(title:'Shakespeare',size:[300,300]) { panel(layout: new BorderLayout()) { textField(id:'quote',constraints: BorderLayout.CENTER, "fair is foul") textArea (id:'area',constraints: BorderLayout.NORTH, proxy.GetSpeech(swing.quote.text).replaceAll("><",">\n <")) button(constraints: BorderLayout.SOUTH,"Search",action:refresh) } } frame.pack() frame.show() } def refresh = swing.action( name:'Refresh', closure:this.&refreshText, mnemonic:'R' ) def refreshText(event) { def newQuote = proxy.GetSpeech(swing.quote.text) swing.area.text = newQuote.replaceAll("><",">\n <") } } Clearly, it's all really cool and lightweight. I can imagine it can be very useful for doing quick tests as part of a larger process. But the above would make sense in a production environment too, I reckon. Why consume web services the hard way if you can do it the easy way? For all details on this, see the aforementioned page, which seems to be the only one that describes this cool Groovy feature. In other news. Put 13949712720901ForOSX in your blog to let Apple know that you want Java 6 support in Mac OS, as described here! And then go here to see all the other people who have already done so... Nov 03 2007, 12:41:45 PM PDT Permalink Nice post. Remember to register your RSS Feed on Groovy Blogs (), this way it will be available for the Groovy community. Posted by Antonio Goncalves on November 04, 2007 at 05:14 AM PST # Have you tried this with the terra service? I get an error as soon as I define the proxy with that WSDL; INFO: Created classes: com.terraserver_usa.terraserver.AreaBoundingBox, com.terraserver_usa.terraserver.AreaCoordinate, ... com.terraserver_usa.terraserver.UtmPt org.apache.cxf.service.factory.ServiceConstructionException at org.apache.cxf.endpoint.dynamic.TypeClassInitializer.begin(TypeClassInitializer.java:94) at org.apache.cxf.service.ServiceModelVisitor.visitOperation(ServiceModelVisitor.java:74) at org.apache.cxf.service.ServiceModelVisitor.visitOperation(ServiceModelVisitor.java:95) at org.apache.cxf.service.ServiceModelVisitor.walk(ServiceModelVisitor.java:48) at org.apache.cxf.endpoint.dynamic.DynamicClientFactory.createClient(DynamicClientFactory.java:250) at org.apache.cxf.endpoint.dynamic.DynamicClientFactory.createClient(DynamicClientFactory.java:138) at groovyx.net.ws.WSClient.<init>(WSClient.java:96) org.codehaus.groovy.runtime.MetaClassHelper.doConstructorInvoke(MetaClassHelper.java:562) at groovy.lang.MetaClassImpl.doConstructorInvoke(MetaClassImpl.java:1756) at groovy.lang.MetaClassImpl.invokeConstructor(MetaClassImpl.java:758) at groovy.lang.MetaClassImpl.invokeConstructor(MetaClassImpl.java:688) at org.codehaus.groovy.runtime.Invoker.invokeConstructorOf(Invoker.java:163) at org.codehaus.groovy.runtime.InvokerHelper.invokeConstructorOf(InvokerHelper.java:140) at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.invokeNewN(ScriptBytecodeAdapter.java:243) at wsterra.main(wsterra.groovy:7) Caused by: java.lang.ClassNotFoundException: byte[] at java.net.URLClassLoader$1.run(URLClassLoader.java:200) Posted by sean d on November 07, 2007 at 09:46 AM PST # I tried to connect to local Jira, but no luck... I got exception with creating proxy: java.lang.ClassCastException: org.apache.xerces.jaxp.DocumentBuilderFactoryImpl at javax.xml.parsers.DocumentBuilderFactory.newInstance(DocumentBuilderFactory.java:98) at java.util.XMLUtils.getLoadingDoc(XMLUtils.java:75) at java.util.XMLUtils.load(XMLUtils.java:57) at java.util.Properties.loadFromXML(Properties.java:701) at org.apache.cxf.common.util.PropertiesLoaderUtils.loadAllProperties(PropertiesLoaderUtils.java:71) at org.apache.cxf.wsdl11.WSDLManagerImpl.registerInitialExtensions(WSDLManagerImpl.java:209) at org.apache.cxf.wsdl11.WSDLManagerImpl.<init>(WSDLManagerImpl.java:97) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) Posted by mare on November 12, 2007 at 01:05 PM PST # I found out I'm not the only one: Posted by mare on November 12, 2007 at 01:16 PM PST # I put some examples of using GroovyWS with Terra on GrrovyWS web sites Posted by tog on November 25, 2007 at 09:15 AM PST # I am very interested in GroovyWS and your blog is very helpful to me, but I can't find groovyws-all-0.1 jar in the web. Could you help me to locate it? Best regards, Gacgde Posted by gacgde on November 30, 2007 at 06:25 AM PST # Just go here and download it there, it has a different name now, but does the same: Posted by Geertjan on November 30, 2007 at 06:28 AM PST # Thank you for the link. But I get this error message when connecting to a service somebody has designed in my company: Unable to create JAXBContext for generated packages: "generated" doesn’t contain ObjectFactory.class or jaxb.index. Before this error message, in the list of generated classes, the right name DataModel.ObjectFactory appears. Then why does JAXB 2.1 looks for a "generated" class? Any idea? I have tested with Java 1.5 or 1.6, Groovy 1.1.rc2 or rc3. Thanks, Gacgde Posted by JAXB 2.1 issue? on November 30, 2007 at 12:34 PM PST # Geertjan, I'm getting the same problem : AXBContext for generated packages: "generated" doesn’t contain ObjectFactory.class or jaxb.index. Did you find a solution ? thanks, Tom Posted by Tom Duerr on January 29, 2008 at 07:38 AM PST # I tried using GroovyWS in linux and it works perfect, but when i try to use it with Windows, I get the following error: [ERROR] IOException during exec() of compiler "javac". Check your path environment variable. My JAVA_HOME and GROOVY_HOME are both set and JAVA_HOME\bin is in my path. Any suggestions? Posted by Schwame on March 28, 2008 at 09:20 AM PDT # I tried to get the BookService (from the Groovy site) to work. I'm getting the following exception in client code: Exception thrown: org.codehaus.groovy.runtime.typehandling.GroovyCastException: Cannot cast object 'Groovy in Action' with class 'java.lang.String' to class 'javax.xml.bind.JAXBElement' Using Groovy 1.5.5 and groovyws-standalone-0.3.1.jar I can get simple WS to work fine. This is occuring when the code is trying to set the title of the book. This seems to be a problem with using the Book class. I removed the "defaultnamespace" and got a Book object that worked until the addBook method was called. It would fail silently and never call the service. All of the code is in the same dir on the same machine. Posted by Dale Frye on April 25, 2008 at 02:01 PM PDT # Hi Geertjan, do you have any experience with webservice authentication? I try to get a webservice running with authentication but I get alway this error: No such property: user for class: groovyx.net.ws.WSClient WSClient client = new WSClient("", this.class.classLoader) client.user="user" client.password="password" I don´t find a running example for this on the web Thx Alwin Posted by Alwin on May 07, 2008 at 01:31 AM PDT # Hi Tom, i had the same problem trying the example using windows. The solution can be found reading the error message: "Check your path environment variable." After adding the jdk/bin directory to the system path - all works great. Could it be that your JAVA_HOME points to a JRE instead of a JDK? Posted by dirksan on June 05, 2008 at 12:30 PM PDT # Hi Geertjan, Im trying to use ur code in a Desktop application ..u r using def is this a class or interface ,and reply with breif about its imports. and little information about proxy.ConversionRate() method Thanks in advance with regards vivek.k Posted by vivek on August 07, 2008 at 04:19 AM PDT # I am having an error when I try to use a local WSDL and a local schema with local paths, all in the same directory ... Any idea? The code ... def proxy = new WSClient("SLCATALOG.wsdl", this.class.classLoader) def cats = proxy.QuerySLCATALOG() for (cat in cats) println cat The WSDL ... <definitions xmlns="" xmlns: <types> <xsd:schema> <xsd:import </xsd:schema> < ....> The error ... java.lang.RuntimeException: Error compiling schema from WSDL at {SLCATALOG.wsdl}: Unable to resolve relative URI SLCATALOGService.xsd because base URI is not absolute: SLCATALOG.wsdl#types1 at org.apache.cxf.endpoint.dynamic.DynamicClientFactory$InnerErrorListener.error(DynamicClientFactory.java:418) at com.sun.tools.xjc.api.impl.s2j.SchemaCompilerImpl.error(SchemaCompilerImpl.java:280) at com.sun.tools.xjc.util.ErrorReceiverFilter.error(ErrorReceiverFilter.java:77) at com.sun.xml.xsom.impl.parser.ParserContext$2.error(ParserContext.java:166) at com.sun.xml.xsom.impl.parser.NGCCRuntimeEx.resolveRelativeURL(NGCCRuntimeEx.java:179) at com.su ..... Posted by Heiko Ludwig on September 23, 2008 at 12:24 AM PDT # I wanted to test the groovy-code of the 'TerraServer-USA by Microsoft' example. But invoking the proxy gives the folloing error: 30.09.2008 20:56:34 org.apache.cxf.endpoint.dynamic.DynamicClientFactory outputDebug INFO: Created classes: com.terraserver_usa.terraserver.AreaBoundingBox, com.terraserver_usa.terraserver.AreaCoordinate, com.terraserver_usa.terraserver.ArrayOfOverlappingThemeInfo, com.terraserver_usa.terraserver.ArrayOfPlaceFacts, com.terraserver_usa.terraserver.ArrayOfThemeBoundingBox, com.terraserver_usa.terraserver.ConvertLonLatPtToNearestPlace, com.terraserver_usa.terraserver.ConvertLonLatPtToNearestPlaceResponse, com.terraserver_usa.terraserver.ConvertLonLatPtToUtmPt, com.terraserver_usa.terraserver.ConvertLonLatPtToUtmPtResponse, com.terraserver_usa.terraserver.ConvertPlaceToLonLatPt, com.terraserver_usa.terraserver.ConvertPlaceToLonLatPtResponse, com.terraserver_usa.terraserver.ConvertUtmPtToLonLatPt, com.terraserver_usa.terraserver.ConvertUtmPtToLonLatPtResponse, com.terraserver_usa.terraserver.CountPlacesInRect, com.terraserver_usa.terraserver.CountPlacesInRectResponse, com.terraserver_usa.terraserver.GetAreaFromPt, com.terraserver_usa.terraserver.GetAreaFromPtResponse, com.terraserver_usa.terraserver.GetAreaFromRect, com.terraserver_usa.terraserver.GetAreaFromRectResponse, com.terraserver_usa.terraserver.GetAreaFromTileId, com.terraserver_usa.terraserver.GetAreaFromTileIdResponse, com.terraserver_usa.terraserver.GetLatLonMetrics, com.terraserver_usa.terraserver.GetLatLonMetricsResponse, com.terraserver_usa.terraserver.GetPlaceFacts, com.terraserver_usa.terraserver.GetPlaceFactsResponse, com.terraserver_usa.terraserver.GetPlaceList, com.terraserver_usa.terraserver.GetPlaceListInRect, com.terraserver_usa.terraserver.GetPlaceListInRectResponse, com.terraserver_usa.terraserver.GetPlaceListResponse, com.terraserver_usa.terraserver.GetTheme, com.terraserver_usa.terraserver.GetThemeResponse, com.terraserver_usa.terraserver.GetTile, com.terraserver_usa.terraserver.GetTileMetaFromLonLatPt, com.terraserver_usa.terraserver.GetTileMetaFromLonLatPtResponse, com.terraserver_usa.terraserver.GetTileMetaFromTileId, com.terraserver_usa.terraserver.GetTileMetaFromTileIdResponse, com.terraserver_usa.terraserver.GetTileResponse, com.terraserver_usa.terraserver.LonLatPt, com.terraserver_usa.terraserver.LonLatPtOffset, com.terraserver_usa.terraserver.ObjectFactory, com.terraserver_usa.terraserver.OverlappingThemeInfo, com.terraserver_usa.terraserver.Place, com.terraserver_usa.terraserver.PlaceFacts, com.terraserver_usa.terraserver.PlaceType, com.terraserver_usa.terraserver.ProjectionType, com.terraserver_usa.terraserver.Scale, com.terraserver_usa.terraserver.Theme, com.terraserver_usa.terraserver.ThemeBoundingBox, com.terraserver_usa.terraserver.ThemeInfo, com.terraserver_usa.terraserver.TileId, com.terraserver_usa.terraserver.TileMeta, com.terraserver_usa.terraserver.UtmPt BindingInfo = org.apache.cxf.binding.soap.model.SoapBindingInfo@1d183b7 o = SOAPBinding ({}binding): required=null transportURI= style=document Caught: java.lang.NoSuchMethodError: org.codehaus.groovy.runtime.InvokerHelper.asArray(Ljava/lang/Object;) [Ljava/lang/Object; at wsTest.run(wsTest.groovy:15) at wsTest.main(wsTest.groovy) I've tried the 0.3.1 version of both the standalone and the all-jar with the same result. (I'm using Groovy Version: 1.5.0 JVM: 10.0-b23) I would very much appreciate any hints about this problem! Posted by Wyss Remo on September 30, 2008 at 12:25 PM PDT # See here for help, Wyss: Posted by Geertjan on September 30, 2008 at 12:42 PM PDT # I don't understand why your main methods are void main() they are normally static main() in Groovy or better yet just omit the main method and put those statements outside the class definition altogether as is done here: Posted by Jeremy Leipzig on October 02, 2008 at 07:00 AM PDT # Has anyone had any success getting GroovyWS to work with attachments? It seems fine for simple WS requests, but when I try to call a WS and include an attachment I get an error starting like this: 16/10/2008 16:24:22 org.apache.cxf.phase.PhaseInterceptorChain doIntercept INFO: Interceptor has thrown exception, unwinding now org.apache.cxf.interceptor.Fault: Marshalling Error: java.lang.NullPointerException at org.apache.cxf.jaxb.JAXBEncoderDecoder.marshall(JAXBEncoderDecoder.java:174) at org.apache.cxf.jaxb.io.DataWriterImpl.write(DataWriterImpl.java:131) Posted by Rick on October 16, 2008 at 12:05 AM PDT # >Unable to create JAXBContext for generated >packages: "generated" doesn’t contain >ObjectFactory.class or jaxb.index. This is was a bug in Apache CXF, but I am not sure if the fix has yet propagated to Groovy WSClient. I suppose you could rebuild it from source but who has time for that? Posted by Jeremy Leipzig on October 31, 2008 at 11:42 AM PDT # Hi, I'm trying GroovyWS with the examples on the groovy site (). I'm using the groovyConsole to try groovyWS, I've donwloaded the groovyws-standalone-0.4.jar and put it into C:\groovy-1.0\lib, and everytime I try to run an example I get en error on proxy.create(), what I'm doing wrong? thanks! [TerraServer-USA by Microsoft example] java.lang.NullPointerException at groovyx.net.ws.WSClient.create(WSClient.java:254) at gjdk.groovyx.net.ws.WSClient org.codehaus.groovy.runtime.ScriptBytecodeAdapter.invokeMethod0(ScriptBytecodeAdapter.java:211) at Script11.run(Script11:4) at groovy.lang.GroovyShell.evaluate(GroovyShell.java:484) at groovy.lang.GroovyShell.evaluate(GroovyShell.java:425) at gjdk.groovy.lang.GroovyShell groovy.ui.Console$_runScript_closure10.doCall(Console.groovy org.codehaus.groovy.runtime.ScriptBytecodeAdapter.invokeMethodOnCurrentN(ScriptBytecodeAdapter.java:97) at groovy.ui.Console$_runScript_closure10.doCall(Console.gro groovy.lang.Closure.call(Closure.java:188) at groovy.lang.Closure.call(Closure.java:183) at groovy.lang.Closure.run(Closure.java:264) at java.lang.Thread.run(Thread.java:595) Posted by pablo on November 30, 2008 at 09:23 AM PST # Pablo, need a proxy.create() after WSClient instantiation (new CXF version impose it) And if you're using JDK6 you don't need to set the endorsed dir. Posted by Manu on January 07, 2009 at 01:15 AM PST # Yo can download SoaMoa unde and let create the groovy script Posted by Ridvan Yesiltepe on February 22, 2009 at 10:23 AM PST # From where can I get the source code of groovyWS0.4 version to debug? I only see 0.2 and 0.3 in the SVN repository under branches and nothing under tags. I am getting NPE when I use proxy.initialize(). Am I missing something? My code looks like - def wsdl = "" def proxy = new WSClient(wsdl, this.class.classLoader) proxy.inilialize() Exception - java.lang.NullPointerException at groovyx.net.ws.WSClient.invokeMethod(WSClient.java:69) at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:45) at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:43) at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:116) at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:120) at Test2Controller$_closure1.doCall(Test2Controller.groovy:17) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) Posted by Meeta on April 07, 2009 at 09:19 AM PDT # @Meeta - try replacing proxy.initialize() with proxy.create() - I think the initialized method disapeard I am also playing with groovyws as client, but I am currently stuck at method/class creation. my service provides a method that returns a list of Strings. The method is generated and I can invoke it, but it always returns a String (actually its always the first element of the list it should return). The returned value is of type java.lang.String and not java.util.List. Any ideas ? I tested the WS with a JAX-WS based client and that one generates the method properly and the returned values is the expected List but I would really like to have the dynamic class generation at runtime :) any help would be appreciated cheers, thasso Posted by Thasso Griebel on April 09, 2009 at 08:50 AM PDT # Posted by The Next Radio on April 09, 2009 at 03:23 PM PDT # Posted by The Next Radio on April 16, 2009 at 05:42 AM PDT # Posted by The Next Radio on May 05, 2009 at 02:04 PM PDT # Posted by The Next Radio on May 05, 2009 at 02:05 PM PDT # Posted by The Next Radio on May 05, 2009 at 02:49 PM PDT # Posted by The Next Radio on May 06, 2009 at 10:29 AM PDT # Posted by The Next Radio on May 07, 2009 at 05:38 AM PDT # Posted by The Next Radio on May 07, 2009 at 05:38 AM PDT # Posted by The Next Radio on May 07, 2009 at 06:21 AM PDT # Posted by The Next Radio on May 07, 2009 at 10:28 AM PDT # Posted by The Next Radio on May 07, 2009 at 10:29 AM PDT # Posted by The Next Radio on May 20, 2009 at 01:57 PM PDT # Posted by The Next Radio on May 20, 2009 at 01:57 PM PDT # Posted by The Next Radio on May 20, 2009 at 01:59 PM PDT # Posted by The Next Radio on May 20, 2009 at 02:19 PM PDT # Posted by The Next Radio on May 21, 2009 at 07:42 AM PDT # PROBLEM ------- I also get the error that "sean d" reported above. I am trying to call the w3schools temperature conversion web service from behind an HTTP proxy. When I call it, I get the exception: org.apache.cxf.service.factory.ServiceConstructionException: Could not resolve URL "". I will try again from my home, where there is no HTTP proxy. CODE ---- import groovyx.net.ws.WSClient def proxy = new WSClient("", this.class.classLoader) proxy.setProxyProperties( [ proxyHost:"proxy.ghc.org", proxyPort:"8080", "proxy.user":"user", "proxy.password":"password" ] ) proxy.initialize() println "You are probably freezing at ${proxy.CelsiusToFahrenheit(0)} degrees Farhenheit" ENVIRONMENT ----------- Groovy 1.6.3 JARS ---- groovyws-minimal-0.5.0.jar cxf-2.1.5.jar geronimo-activation_1.1_spec-1.0.2.jar geronimo-annotation_1.0_spec-1.1.1.jar geronimo-javamail_1.4_spec-1.3.jar geronimo-stax-api_1.0_spec-1.0.1.jar jaxb-api-2.1.jar jaxb-impl-2.1.9.jar junk.txt neethi-2.0.4.jar wsdl4j-1.6.2.jar wstx-asl-3.2.6.jar xml-resolver-1.2.jar XmlSchema-1.4.5.jar NOTES ----- * All jars, except for groovyws-minimal-0.5.0.jar, are specified by CXF 2.1.5's WHICH_JARS file. * I can access the WSDL URL via my web browser. * I use the same proxy parameters in my Maven settings.xml file. * proxy.initialize() is the correct way to initialize the client. Using proxy.create() causes a NPE, because there is no no argument create() method, unlike what "Manu" and "Thasso Griebel" said above. Posted by devdanke on May 27, 2009 at 04:15 PM PDT # Posted by The Next Radio on June 05, 2009 at 10:06 AM PDT # Posted by The Next Radio on June 05, 2009 at 11:11 AM PDT #
http://blogs.sun.com/geertjan/entry/groovy_web_service
crawl-002
en
refinedweb
Today's Page Hits: 472 Yes, JDK 6 has been released today. As many of you know already, scripting is one of the important features of JDK 6. Scripting API is in the javax.script package which is specified by JSR-223. It is very simple API to use scripting languages from Java code. To use scripting language from your Java code, you need to have JSR-223 compliant "script engine" - i.e., implementation of the javax.script API for your language of choice. Sun's implementation of JDK 6 comes with JavaScript script engine - which is based on Mozilla's Rhino implementation. So, you can "eval" JavaScript code from your Java code! Scripting API helps you javax.script.ScriptEngineManager. Please note that to use any language other than JavaScript [which is bundled with JDK 6], you need script engine implementation for your language. You can download jsr-223 script engine most popular scripting languages such as Groovy, JRuby and many other languages from scripting.dev.java.net javax.script.ScriptEnginefor your scripting language, you just evaluate code in that language by calling, guess what, by calling evalmethods. You can evaluate script code from a String or from a java.io.Reader. javax.script.ScriptEnginehas putand getmethods to expose Java objects as "global" variables to script. In addition, there are interfaces such as javax.script.Bindings[Bindings is "scope" - a set of name, value pairs] and javax.script.ScriptContextmay be used for finer control. The later is used to support one or more scopes in the script global namespace. javax.script.Invocable[yes, the bundled JavaScript engine and most engines at scripting.dev.java.net support this optional interface], you can use it to call a specific script function from Java code. This may be used to call script function, say for example, from a user interface event handler. javax.script.Invocablefor this purpose as well. Simple main class to evaluate JavaScript to print "hello world": There are two samples in JDK installation:There are two samples in JDK installation: import javax.script.*; public class Main { public static void main(String[] args) throws Exception { ScriptEngineManager manager = new ScriptEngineManager(); ScriptEngine jsEngine = manager.getEngineByName("JavaScript"); jsEngine.eval("println('hello world')"); } } If you are interested in using scripting on the serverside code, you may want to look at the Phobos project. This project uses JSR-223 scripting feature. Posted by scriptics on December 14, 2006 at 12:45 PM IST #
http://blogs.sun.com/sundararajan/entry/the_horse_starts_running_jdk
crawl-002
en
refinedweb
A SAX Parser Based on JavaScript's String.replace() Method?. A native SAX implementation in JavaScript would for example let you grab data from RSS feeds over Ajax without loading the entire RSS document into a DOM tree. Or, assuming your XHTML was well-formed, it would let you rapidly query the current document. (Although it wouldn't be able to return references to existing DOM nodes.). (YMMV depending on what browser you use.) Try it out on this simple XML fragment: The SAX function looks like this: doSax(stringToParse,doStartTag,doEndTag,doAttribute,doText); The callback functions for this example are: function doStartTag(name){alert("opening tag: "+name);} function doEndTag(name){alert("closing tag: "+name);} function doAttribute(name,val){alert("attribute: "+name+'="'+val+'"');} function doText(str){ str=str.normalize(); if(!str){str='[whitespace]';} alert("encountered text node: "+str); } And here's the code: sax.js. I think that with a little work (i.e. the ability to handle namespaces, comments, and other declarations) this could potentially be usable--maybe not as a full-fledged SAX parser--but a quick and dirty utility for reading XML via Ajax. Hmm, a tag soup SAX-style parser might be nice to have too. Nice job! Posted by Jose on April 07, 2008 at 06:55 AM MDT #
http://blogs.sun.com/greimer/entry/a_sax_parser_built_on
crawl-002
en
refinedweb
Solaris sockets, past and present Prior to Solaris 2.6, sockets were an abstraction that existed at the library level. That is, much of the socket state and socket semantics support were provided within the libsocket library. The kernel's view of a process's socket connection entailed a file descriptor and linkage to a Stream head, which provided the path to the underlying transport. The disparity between the library socket state and the kernel's view was one of several reasons a new implementation was introduced in Solaris 2.6. To provide a relevant basis for comparison, we'll start by looking at what happens in the pre-Solaris 2.6 release (that is, releases up to and including Solaris 2.5.1) when a socket is created. The major software layers are shown in Figure 1 for reference. The primary software components are the socket library and the sockmod Streams module. The specfs layer is shown for completeness and is part of the layering, due to the use of pseudodevices as an entry point into the networking layers. To digress for a moment, the special filesystem, specfs, came out of SVR4 Unix as a means of addressing the issue of device special files that exist on Unix on-disk filesystems (e.g., UFS). Unix systems have always abstracted I/O (input/output) devices through device special files. The /dev directory namespace stores files that represent physical devices and pseudodevices on the system. Using device major numbers, those device files provide an entry point into the appropriate device driver, and using minor numbers, they are able to uniquely identify one of potentially many devices of the same type. (That is something of an oversimplification, but is sufficient for our purpose here in describing specfs.) The /dev directory resides on the root filesystem, which is an instance of UFS. As such, references to the filesystem and its files and directories are handled using the UFS filesystem operations and UFS file operations. That is usually sufficient, but is not desired behavior for device special files. I/O to a device special file requires entry into a device driver. That is, issuing an open(2) system call on /dev/rmt/0 means someone wishes to open the tape device represented by /dev/rmt/0, thereby entering the appropriate driver's xx_open() routine. As a file on a UFS filesystem, the typical open routine called would be the ufs_open() code, but that's not what we want for devices. The specfs filesystem was designed to address such situations; it provides a straightforward mechanism for linking the underlying structures for file support in the kernel to the required device driver interfaces. Like all filesystems in Solaris (and any SVR4-based Unix) it's based on the VFS/vnode infrastructure. (See Solaris Internals and UNIX Internals in the Resources section for detailed information on VFS.) Getting back to sockets in Solaris 2.5.1, the specfs layer!
http://www.itworld.com/swol-0309-insidesolaris
crawl-002
en
refinedweb
Have you met your new favorite LDAP directory, OpenDS? Oh, you haven't? Well, dude. Let me make some introductions. Introducing the first stable release of the OpenDS Project, OpenDS 1.0.0! You've got an awesome package with this one, folks. OpenDS promises: OpenDS is an open source LDAP directory written in - you guessed it - 100% Java. - Maximum, extensible interoperability with LDAP client apps - Directory-related extras, such as directory proxy, virtual directory, namespace distribution and data synchronization - Ability to embed the server in other Java apps What's really cool is that, with the Java WebStart installer Quick Start, you can have the OpenDS server configured, up and kickin' in less than 3 minutes! Lightspeed Java action? Sweet. But, y'know, I may be a tad biased ;] Check it out for yourself! Learn a little more about it at the OpenDS Wiki, or download OpenDS 1.0.0 right now! - Duke
http://blogs.sun.com/duke/entry/have_you_met_your_new
crawl-002
en
refinedweb
By: Barry Mossman Abstract: This is the third article in a series upon the GOF Design patterns from a C# and .Net Framework perspective. It examines some of the BEHAVIORAL Patterns; Chain of Responsibility, Command and Interpreter. BarryMossman<is_at>primos.com.au This is my third article about the GOF Design patterns. The first two articles studied the Creational and Structural Patterns, while this one begins to look at the Behavioural Patterns. There are links to my first two articles at the bottom of this one. There is too much material within the Behavioural patterns to be covered by one article, so I cover just the first three patterns in this article, and will mop up the remainders in a follow ups. This articles covers the following patterns: Chain of Responsibility, Command, Interpreter The following Behavioural patterns have been deferred to follow up articles: Iterator, Mediator, Observer, Memento, State, Strategy, Template Method, Visitor The source for this article's demonstration program is available at CodeCentral. See the link at the bottom of this document. The first article (Creational Patterns) also gave an overview of where the patterns came from, and the general discussion of the general techniques that they promoted. I will firstly briefly recap a few of the points from that article. The GOF design patterns help address the following challenges : design ready to accommodate change & growth design flexible systems that come ready to handle reconfiguration and run time tailoring code in manner to facilitate reuse during the development and extension phases ... ie. both external and internal reuse, so that we are rewarded by efficiencies as the project progresses, coming for investments made earlier in the project. implement change in a way that doesn't overly shorten the system's useful lifespan. There is a link to this book at the end of this document. way offer a great deal of runtime flexibility, and are better set up for future modification. A general theme of the Behavioural patterns is that they allow the composition of larger and more flexible structures from smaller helper classes. The approach necessitates more physical classes, but is made more workable where Chain of Responsibility Allows us to decouple a client's request for action, from the class that will implement it. The client builds a chain of candidate classes (Handlers) to handle the request, and then passes the request to the chain. The client is simplified as it does not need to know which class will finally handle the request. The request is passed down the chain, one handler at a time, until one of them accepts responsibility for the request. There is much flexibility as the chain is built at runtime, and the sequence can therefore be tailored by runtime or configuration conditions. Command Allows us to make commands, or requests against an object, into objects themselves. Potential OO benefits include grouping of command series into macros, providing undo support, facilitating command logging, persisting commands or macros via Save|Restore, queuing commands for later execution, and runtime tailoring of commands that are to be executed. Interpreter This pattern enables us to design a script language, it's syntax, and then implement an Interpreter to process requests that have been recorded in that language. This allows us to create a flexible client that is capable of receiving and actioning high level request scripts written in a command language that we have designed to suit the users of our system. This pattern allows us to decouple a client's request for action, from the class that will implement it. We have seen something similar in the Bridge pattern (Structural patterns). In the case of the Bridge pattern the client decided at runtime which Implementor class was going to handle calls to the Adapter's interface. This gave us runtime flexibility, but the client needed to be involved in choosing which Implementor class was best to handle the request. In the Chain of Responsibility pattern the client sets up a chain of candidate Implementors (called Handlers in this pattern). The request is passed to the chain, where it bubbles up, until one of the Handlers determines that it should handle the request itself. The client is simplified as it does not need to know which class will finally handle the request. It just passes the request to the object that implements the chain pattern, and leaves the chain to sort out who should handle things. A familiar example of this kind of design is when an Exception is raised in the client. The exception bubbles up the invocation stack until somebody handles it. The client doesn't need to worry about who should handle the exception. It can trust that somebody appropriate will handle it, even if it is just the default handler in the dotNet framework. The design is flexible as the chain is built, and can be re-configured, at runtime. The Handlers are relatively simple as they don't need to know about the client, nor do they need to know about other handlers in the chain other than their immediate successor. However the chain does need to be set up correctly to ensure that some Handler will pick up responsibility for the request, and that it won't just fall off the end of the chain. The chain could use existing relationships between the Handlers if such relationships exist(eg. Parent references if there were nodes with a Composite pattern (see Structural patterns), or the client can define it's own set of relationships between the handlers in the chain. Our demonstration program simulates a dynamic requirement by passing the current time into the chain of handlers. The handler that will accept responsibility is dependant how far through the current minute we have progressed. Firstly our client code is as follows: /* Set up a chain of candidate handlers to handle the request. Chain will be (1st considered -> last): ConcreteHandler1, ConcreteHandler2, ConcreteHandler3. */ ShowUserCommentary(1); HandlerBase chain = new ConcreteHandler3(); HandlerBase more = new ConcreteHandler2(); more.Successor = chain; chain = new ConcreteHandler1 (); chain.Successor = more; // hand the request to the chain listBox1.AppendText(chain.SayWhen(DateTime.Now)); This produces the following output (as with all of the following examples the black text is output by the ShowUserCommentary method, and the blue text is output by the pattern).: Now to the implementation, the essential interface for the pattern could be described as follows: public interface IHandler { // properties HandlerBase Successor { set; } } Firstly we need to define an abstract base class for the Handlers. This is where we implement the Successor property: // --- Abstract Handler abstract public class HandlerBase { // fields protected HandlerBase _successor; // properties public HandlerBase Successor { set { _successor = value; } } // methods abstract public string SayWhen(DateTime localTime); } Finally we have the implementation of the Handlers. Here are the first two: // --- Concrete handlers public class ConcreteHandler1: HandlerBase { // methods override public string SayWhen(DateTime localTime) { if (localTime.Second < 15) return String.Format(I am {0}.nThe time is {1:T}nWe are just starting on a shiny new minute. Just think of all the things you could do with it!, this.ToString(),localTime); else { if (_successor != null) { return _successor.SayWhen(localTime); } else throw new ApplicationException(ChainOfResponsibility object exhaused all successors without call being handled.); } } } public class ConcreteHandler2: HandlerBase { // methods override public string SayWhen(DateTime localTime) { if ((localTime.Second >= 15) & (localTime.Second < 45)) return String.Format(I am {0}.nThe time is {1:T}nWe are into the middle half of this minute. You need to get started if you going to use it well., this.ToString(),localTime); else { if (_successor != null) { return _successor.SayWhen(localTime); } else throw new ApplicationException(ChainOfResponsibility object exhaused all successors without call being handled.); } } } The Command pattern allows us to make a request into an object. The target of the request is called the Receiver, and it knows how to carry out the steps involved in servicing the request. The client wants to have the request serviced, but need not know the steps involved, nor have any knowledge of the Receiver's interface. The client creates a Command object which contains the request type, any parameters, and a reference to the target Receiver. This Command object contains the detailed knowledge of the steps involved in servicing the client's request, and has knowledge of the Receiver's interface. The pattern can also have an optional object called the Invoker which instructs the command to execute it's request. This could be a menu item or some other GUI control. Macro Commands can be created which contain a collection of other Command objects. Possible benefits from use of the command pattern include: decouple the client from the request as well as the object that will be handling it. ability to queue up requests for action at a later time ability to log the requests away for possible re-application after returning the system to a backup check point following a crash ability to bundle up a series of plodding detailed requests into a larger transaction (macro) that makes the business intention clearer provide undo support flexibility where we can provide some runtime configuration to the commands that are issued allow transfer of the command to a different process for handling The client will be simplified if it can request simple chunky commands that relate to business transactions. The system design intent can be more easily seen if the plodding internal detail for each transaction type is outsourced to the Command objects. It is also reasonably straight forward to add further business transaction types by creating new Command classes. This will involve minimal change requirement inside the client. My demonstration program shows the following: an individual command is executed then a macro command set, with undo support, is executed and undone the macro command set is persisted to disk the macro command set is reloaded, and applied to a different Receiver. Here is the client: // Create a Receiver, then a Command to add 100.00 to it. // Execute the command, and show the result. ShowUserCommentary(1); Receiver receiver = new Receiver(); // note: the m suffix causes a literal of type Decimal CommandBase command = new CreditCommand(100m,receiver); command.Execute(); listBox1.AppendText(String.Format(nValue is {0},receiver.Value)); // Create a 2nd receiver. Then create a macro Command set that adds // 50.00 to the 2nd receiver, and then subtracts 20.00 from the 1st // and then increases the 1st by 5%. Execute the macro. ShowUserCommentary(2); Receiver receiver2 = new Receiver(); MacroCommand macro = new MacroCommand(); command = new CreditCommand(50m,receiver2); macro.Add(command); command = new DebitCommand(20m,receiver); macro.Add(command); command = new PercentIncreaseCommand(5m,receiver); macro.Add(command); macro.Execute(); listBox1.AppendText(String.Format( nValue 1st receiver is now {0}, and 2nd receiver is {1} ,receiver.Value, receiver2.Value)); // Undo the effects of the above macro. ShowUserCommentary(3); macro.Undo(); listBox1.AppendText(String.Format( nValue 1st receiver is now {0}, and 2nd receiver is {1} ,receiver.Value, receiver2.Value)); // Persist the macro command to disk. macro.Save(MacroSave.bin); // Reload the macro, and then execute all of the commands against the // second Receiver. ShowUserCommentary(4); MacroCommand macro2 = macro.Load(MacroSave.bin, receiver2); macro2.Execute(); listBox1.AppendText(String.Format( nValue 1st receiver is now {0}, and 2nd receiver is {1} ,receiver.Value, receiver2.Value)); The essential public interface for the pattern could be described as: public interface ICommand { // methods void Execute(); } Here is the abstract ancestor for the Commands and Macro class. You may notice the Serializable attribute which I will discuss when showing the implementation for the Macro Command at the end of this section . // --- Abstract Command [Serializable] public abstract class CommandBase { // fields internal Receiver _receiver; internal decimal _amount; //properties public virtual Receiver TargetReceiver { set {_receiver = value;} } // constructor public CommandBase (decimal aAmount, Receiver aReceiver) { _receiver = aReceiver; _amount = aAmount; } protected CommandBase () { // only by the MacroBase descendant } // methods abstract public void Execute(); abstract public void Undo(); } Next comes the concrete implementation of the credit command. There are similar implementations for the debit and percentage increase commands which have been omitted for brevity. The Command class provides an interface, but doesn't actually know the details of how to achieve the request's intentions. It is however familiar with the Receiver, and knows which Receiver members to employ to achieve the desired result. // --- Concrete Commands [Serializable] public class CreditCommand: CommandBase { // constructor public CreditCommand(decimal aAmount, Receiver aReceiver): base (aAmount, aReceiver) {} // methods override public void Execute() { _receiver.UpdateValue(ReceiverAction.add,_amount); } override public void Undo() { _receiver.UpdateValue(ReceiverAction.subtract,_amount); } } Then we should look at the Receiver. This is the class that knows the mechanics of how to action the request. // --- Receiver public enum ReceiverAction {add,subtract,percentIncrease,percentDecrease} [Serializable] public class Receiver { // fields decimal _value; // properties public decimal Value { get { return _value; } } // methods internal void UpdateValue (ReceiverAction aOperation,decimal aAmount) { switch (aOperation) { case ReceiverAction.add: _value += aAmount; break; case ReceiverAction.subtract: _value -= aAmount; break; case ReceiverAction.percentIncrease: _value *= (1+aAmount/100); break; case ReceiverAction.percentDecrease: _value /= (1+aAmount/100); break; default: throw new ApplicationException(Invalid operation); } } } The above pieces are all that we need to enable our client to execute individual commands and to provide undo support. Now lets us look at providing macro support and persistence of the commands to disk. The dotNet framework makes it relatively simple to implement the persistence, and retrieval, of our commands as it it has extensive inbuilt support for Serialization. Serialization is where an in-memory object is flattened out to a form that can be transmitted or stored. Deserialization is the reverse process. We are given two flavours of serialization with dotNet; internally using either XML or a binary format. XML Serialization will handle our objects public fields and properties, but for this task we need all the type information, methods, private fields etc so we will use the binary format option. All we need to do to make a class serializable, is mark it with the Serializable attribute. This will cause the type's meta data to be marked as needing serialization support from the runtime support (CLR)when our class is instantiated. To implement the macro facility we subclass from our Abstract CommandBase, and then implement the interface's Save and Load functionality as shown below. The TargetReceiver property is used to bind any loaded (deserialize) commands to their new Receiver. The Macro functionality is implemented via the Add, Execute and Undo methods. [Serializable] public class MacroCommand: CommandBase { // fields ArrayList _commandStack; //properties override public Receiver TargetReceiver { set { foreach(CommandBase command in _commandStack) { command.TargetReceiver = value; } } } // constructor public MacroCommand(): base() { _commandStack = new ArrayList(); } // methods override public void Execute() { foreach(CommandBase command in _commandStack) { command.Execute(); } } override public void Undo() { ArrayList commandsReversed = (ArrayList)_commandStack.Clone(); commandsReversed.Reverse(); foreach(CommandBase command in commandsReversed) { command.Undo(); } } public void Add(CommandBase aCommand) { _commandStack.Add(aCommand); } public void Save(string aFileName) { IFormatter formatter = new BinaryFormatter(); Stream stream = new FileStream (aFileName,FileMode.Create, FileAccess.Write,FileShare.None); formatter.Serialize(stream,this); stream.Close(); } public MacroCommand Load(string aFileName, Receiver aReceiver) { IFormatter formatter = new BinaryFormatter(); Stream stream = new FileStream(aFileName,FileMode.Open, FileAccess.Read, FileShare.Read); MacroCommand macro = (MacroCommand)formatter.Deserialize(stream); stream.Close(); macro.TargetReceiver = aReceiver; return macro; } } You may have noticed that only the MacroCommand class is serialized, but other classes such as Rec and AbstractCommand have also been marked with the Serializable attribute. This is because while it is true that only the MacroCommand is explicitly serialized, the other class are implicitly serialized. This is because the MacroCommand contains references to these other types. See for example: public MacroCommand Load(string aFileName, Receiver aReceiver) { foreach(CommandBase command in _commandStack) { The Concrete methods such as CreditCommand are not explicitly mentioned, but they need to marked with the serializable attribute also. If you fail to do so the program will compile successfully, but you will get a SerializationException at runtime. This will occur because the actual objects within the macro are of course instances of these concrete classes, and the CLR will not be able to serialize or deserialize them unless they are identified as needing serialization support. This pattern allows us to define a command or query language and it's grammar, to create grammatical sentences in that language, and then to interpret the sentences at runtime. This allows us to create a flexible client that is capable of receiving and actioning a request in a script format written in a command language of our own invention. The pattern allows much high level client flexibility as the command script could be obtained at runtime; maybe built or keyed in by the user, or obtained from a configuration file. The client receives the command sentence as an Expression syntax tree which is built from a combination of terminal and non-terminal nodes to allow nesting within the command sentence's logical structure. The parsing of a command script to build the syntax tree is not part of the Interpreter pattern, although my demonstration program shows an example of this step. There is an abstract Expression class that is inherited by all tree nodes. This base class has an abstract Interpret method. There is a concrete sub-class of this base for each grammatical rule within the language that we are implementing. The client creates a Context instance and a syntax tree, and then asks the syntax tree to Interpret itself passing in the context instance as an argument. The grammatical composition of the sentence is represented by the tree, and it's layout causes the various sub-classes to fire in the correct sequence to action this specific command. The sub-classes use the context instance parameter as working storage to build and store the result from the command sentence. The GOF recommend that the although it is quite easy to extend or modify our language's grammar the overall gramme should not be allowed to get too complex. My demonstration program contains two illustrations: the first implements a simple language that allows the client to build a flexible date based string that would be suitable for use as a archive backup file name the second example allows us to write a query command to interrogate a date known to the context The first illustration allows the client to build a name for a archive file that we may like to write. The file name is to be assembled from some combination of day, month & year digits, our pattern's namespace name, and a string that we supply at runtime. The pattern will build the string as instructed by the shape of the syntax tree received from the client. Firstly let us look at client code that will explicitly build and use this syntax tree, using a file name structure that is known at compile time. In this case the file name will be built in the following sequence: current year (2 digits) current month (2 digits) current day (2 digits) a - character finally the namespace name where the pattern is implemented ShowUserCommentary(1); ArrayList commandList = new ArrayList(); commandList.Add(new ExpressionCodeYear()); commandList.Add(new ExpressionCodeMonth()); commandList.Add(new ExpressionCodeDay()); commandList.Add(new ExpressionCodeConstantString(-)); commandList.Add(new ExpressionCodeNamespace()); Context context = new Context(); foreach(ExpressionBase exp in commandList) exp.Interpret(context); /* display the file name assembled by the Interpreter from our syntax tree */ listBox1.AppendText(context.FileName); This pattern is more powerful if the syntax tree structure is not determined until runtime, after parsing a command script, which could have been obtained from the user, or obtained from a configuration file. In this next example the client parses the string n%-%ymd which will cause the following syntax tree to be built: the namespace name where the pattern is implemented year digits month digits day digits /* Now parse a command string to determine how to build the syntax tree. This allows flexibility as the command string may have been supplied at runtime */ ShowUserCommentary(2); InterpreterParsing parse = new InterpreterParsing(); string fileNamePattern = n%-%ymd; commandList = parse.ParseFileNamePattern(fileNamePattern); context = new Context(); foreach(ExpressionBase exp in commandList) exp.Interpret(context); listBox1.AppendText(context.FileName); When we run the test the following output is produced: The essential interface for the pattern could be described as: public interface IContext { } public interface IExpression { // methods void Interpret(IContext aContext); } I will ignore the parsing step for the moment as this is not really part of the Interpreter pattern. Let us imagine that the Expression tree object has already been built as in the first client snippet, and look at the implementation of the pattern classes that will interpret this tree. The first part of the implementation that we shall look at is the Context which contains working storage used during evaluation of the expression, and also contains the result returned from this evaluation: // --- Context public class Context { // fields internal string _fileName = ; // properties public string FileName { get { return _fileName; } } } Then there is the definition of a base class for the Expression nodes: // --- Abstract Expression public abstract class ExpressionBase { // methods public abstract void Interpret(Context aContext); } And then the concrete Expression classes each of which provides implementation of the abstract Interpret method. I have omitted the Month and Year classes for brevity as they are only slightly different from the day class which is shown: // --- Concrete Expressions public class ExpressionCodeDay: ExpressionBase { // methods public override void Interpret(Context aContext) { aContext._fileName += String.Format({0:dd},DateTime.Today); } } public class ExpressionCodeNamespace: ExpressionBase { // methods public override void Interpret(Context aContext) { aContext._fileName += String.Format({0},this.GetType().Namespace); } } public class ExpressionCodeConstantString: ExpressionBase { // fields string _string; // constructors public ExpressionCodeConstantString(string AString) { _string = AString; } // methods public override void Interpret(Context aContext) { aContext._fileName += String.Format({0},_string); } } Parsing of a command script to build the syntax tree is not actually part of the pattern, but my demonstration program includes this step as this is what makes the pattern powerful. Here is the code that does the parsing. This kind of problem is made quite simple if you employ Regular Expressions engine implemented into dotNet to break apart the string being parsed. /*--------------------------------------------------- Parse a file name expression string and build an Interpreter pattern command structure as directed. Expression syntax is some combination chosen from the following: y = year m = month d = day n = namespace name %s% where s is a string (no embedded blanks) So an expression string of ymd%--%n would produce the following Interpreter syntax tree: ArrayList commandList = new ArrayList(); commandList.Add(new ExpressionCodeYear()); commandList.Add(new ExpressionCodeMonth()); commandList.Add(new ExpressionCodeDay()); commandList.Add(new ExpressionCodeConstantString(--)); commandList.Add(new ExpressionCodeNamespace()); ---------------------------------------------------*/ public ArrayList ParseFileNamePattern(string aPattern) { ArrayList commandList = new ArrayList(); Regex regex = new Regex( @%.* # a constant string identified by a leading % (?=%) # drop off the trailing % so it is discarded |y # y = year |m # m = month |d # d = day |n # n = namespace, RegexOptions.IgnoreCase | RegexOptions.Multiline | RegexOptions.IgnorePatternWhitespace | RegexOptions.Compiled ); MatchCollection itemList = regex.Matches(aPattern); foreach (Match m in itemList) { switch (m.ToString().ToLower()[0]) { case '%': if (m.Length > 1) { string stringText = m.ToString().Substring(1,m.Length-1); commandList.Add(new ExpressionCodeConstantString(stringText)); } break; case 'y': commandList.Add(new ExpressionCodeYear()); break; case 'm': commandList.Add(new ExpressionCodeMonth()); break; case 'd': commandList.Add(new ExpressionCodeDay()); break; case 'n': commandList.Add(new ExpressionCodeNamespace()); break; } } return commandList; } My second illustration of the Interpreter pattern is to write a query command to interrogate a date known to the context. In this example we implement a query language that would allow the user to write a query string that would interrogate a date and return a boolean result. This second example has been included to demonstrate the use of non-terminal nodes in the query expression which allows logical nesting of sub queries within the query expression. A comment block follows that explains the syntax of the language that we will implement. I won't include the parsing step this time as it is incidental to the Interpreter pattern as proscribed by the GOF, and is complex enough to be a diversion. The source is available in my demonstration program if you are interested. /*--------------------------------------------------------------- Parse a BORCON date query expression string, and build an Interpreter pattern command structure as directed. The expression will allow the user to write a query which will return a true|false result. In this example they are querying the date of the 2004 USA BORCON. The query syntax uses the following elements to build sub queries: identifier: y = year m = month d = day operator : <,>,= value : digits (year is in form yyyy, eg. 2004, not 04) Sub queries may be ANDed or ORed against each other. Examples of valid queries are: y=2004 (y=2004) (y=2004)&(m=12) ((y=2004)&(m=12))|(y<2004) etc (any depth is allowed) The last example above would produce the following Interpreter syntax tree: -+->new ConcreteQueryNonTerminalORExpression() +--+--> new ConcreteQueryNonTerminalANDExpression() +-----> new QueryTerminalYear('=',2004) +-----> new QueryTerminalMonth('=',12) +-----> new QueryTerminalYear('<',2004) --------------------------------------------------------------------*/ Here is the client call to initiate the demonstration: /* This 2nd illustration of the Interpreter pattern allows the client to interrogate a date known to the Context instance (it is the date of the 2004 USA Borcon). The result of the interpretation will be a boolean. The gramme allows the +, > and < operators. The year, month and day can be tested. Sub-queries can be ANDed or ORed against each other to any level of nesting. The syntax tree is built at runtime by parsing a command string. */ ShowUserCommentary(3); string script = ((y=2004)&(d>10))&((m=10)|(m=9)); QueryExpressionBase query = parse.ParseQuery(script, testOutput); Context_Query queryContext = new Context_Query(); query.Interpret(queryContext); listBox1.AppendText(String.Format(nnThe query evaluates as {0}, queryContext.value.ToString())); To implement this illustration of the Interpreter pattern we firstly we need to define the Context: // --- Context public class Context_Query { // fields internal bool _value; internal readonly DateTime _Borcon2004Start = new DateTime(2004,9,11); // properties public bool value { get { return _value; } } } Then there is the base class that is shared by both terminal and non-terminal Expression nodes: public abstract class QueryExpressionBase { // methods public abstract void Interpret(Context_Query aContext); } Now the implementation of the syntax tree. This time the tree is composed from terminal and non-terminal nodes. In the terminal nodes we evaluate a sub-query which have uses a =, < or > operator. In the non-terminal node we and or or the results of two sub-queries. ((y=2004)&(m=12))|(y<2004) etc (any depth is allowed) (y=2004) and (m=12) are examples of sub queries. Firstly let's look at the terminal nodes. The following shows their abstract ancestor, and the implementation of the concrete class which handles the year comparisons. There are similar class which handle month and day comparisons which have been omitted for brevity. public abstract class QueryTerminalBase: QueryExpressionBase { // fields protected char _operation; protected int _ComparisonValue1, _ComparisonValue2; // methods override public void Interpret(Context_Query aContext) { switch (_operation) { case '=': aContext._value = (_ComparisonValue1 == _ComparisonValue2); break; case '>': aContext._value = (_ComparisonValue1 > _ComparisonValue2); break; case '<': aContext._value = (_ComparisonValue1 < _ComparisonValue2); break; default: throw new ApplicationException(String.Format(Unexpected operation; operation was {0}. Only =, < or > allowed., _operation)); } } } public class QueryTerminalYear: QueryTerminalBase { // constructors public QueryTerminalYear(char aOperation, int aValue) { _operation = aOperation; _ComparisonValue2 = aValue; } // methods override public void Interpret(Context_Query aContext) { _ComparisonValue1 = aContext._Borcon2004Start.Year; base.Interpret(aContext); } } Finally we need to look at the non-terminal classes which handle the and and or operations. Recursion handles the situation where were there are nested sub-queries. abstract public class QueryNonTerminalExpressionBase: QueryExpressionBase { // fields private ArrayList _childContents = new ArrayList(); protected bool _leftSide = false, _rightSide = false; protected bool _firstSide = true; // methods override public void Interpret(Context_Query aContext) { _firstSide = true; foreach (QueryExpressionBase xx in _childContents) { xx.Interpret(aContext); if (_firstSide) _leftSide = aContext._value; else _rightSide = aContext._value; _firstSide = false; } } public void Add(QueryExpressionBase aExpression) { _childContents.Add(aExpression); } } public class ConcreteQueryNonTerminalANDExpression: QueryNonTerminalExpressionBase { // methods override public void Interpret(Context_Query aContext) { base.Interpret(aContext); aContext._value = _leftSide & _rightSide; } } public class ConcreteQueryNonTerminalORExpression: QueryNonTerminalExpressionBase { // methods override public void Interpret(Context_Query aContext) { base.Interpret(aContext); aContext._value = _leftSide | _rightSide; } } As noted above, I have omitted the parsing step from this article as it is not really part of the Interpreter pattern. The source code is available in the demonstration program This article has begun examination of the GOF's Behavioural Patterns from those described in their book titled Design patterns. I think that the study of the design patterns is a worthwhile thing to do. I found the following links useful while studying the patterns myself and while preparing this article. The book itself is a good investment as it provides supplementary detail upon the problems that the patterns are trying to solve, the elements of the solution, and the consequences and trade-off's involved in using the patterns. Look out for my closing article in this series will will study the remaining Behavioural Patterns. Source code for this articles demonstration program The source and executable are stored at CodeCentral. GOF Creational Patterns ... If you found this article helpful you may also enjoy my earlier article upon the Creational Patterns (Factory, Abstract Factory, Builder, Prototype and Singleton). GOF Structural patterns ... I also have this article upon the Structural Patterns (Adapter, Bridge, Composite, Decorator, Facade, Flyweight & Proxy). GOF Book ... This link points to the GOF book on the publisher's web site.... Expresso – an excellent free ware tool for building and testing Regular Expressions (as used within the .Net Regex class). ... thumbnail sketches of all the design patterns with C# implementation examples ... contains a series of articles on the GOF patterns with C# implementation examples
http://edn.embarcadero.com/article/32710
crawl-002
en
refinedweb
Hommes and Process is a longtime Microsoft Groove partner. Most recently, they have published a paper on using Groove and SharePoint, available in french, and soon to be translated in English. Fabrice Barbin, co-founder of H&P, recently attended a weeklong conference focused on developing the Microsoft Office Groove certification exam, due out in October 2007. I think he is sworn to secrecy on the topic, but perhaps he will have additional IT focused topics on his blog. I'll have to brush up on my high school french, but I'm happy to see that he thinks the Groove Advisor is "très bonne qualité" . Fabrice posted recently about the URL changes for access to the old Groove Networks Hosted Services site. Si vous etes francophone, puis lirez son blog! (I think). --abbott Link to this article: PingBack from Mark, Otto, thanks for the feedback in the comments. (for some reason, I can't leave a comment myself, master, and by the knowledge that I was nothing to him. but there 8K3LQ44ZXMC myself at once in the twilight of close ranked trees. There was a mute devotion. Only the last words of the worship were audible. 8K3LQ44ZXMC soon I should dare to drop a kiss on that brow of rock, and on those where, or whence, for ever impossible to know. And it was the voice Hoirn8SFK77 Then learn from me, not to judge by appearances. I am, as Miss suffering mother of love. its anguish is the very natal pang of the Hoirn8SFK77 Helen Burns, if you don t go and put your drawer in order, and fold scanty, in which labour is commonly maintained in that place. <a href= >how to add links to myspace</a> into motion very different quantities of productive labour and augment, purposes of money than they were before. In order to make the <a href= >drunk girls facebook</a> far the best adapted for the supply of the home market and the years together steadily and constantly, either more or less above, or more or less below the <a href= >myspace music player support</a> however, were considered as equivalent to a guinea, which, perhaps, indeed, was worn and They represented, first, that the exportation of gold and silver, <a href= >proxy site to get on myspace at school</a> forth of the kingdom. The like policy anciently took place both encouraged and supported them. The law which would restore entire <a href= >graffiti myspace backrounds</a> value, as well as that of the most expensive workmen. The most important stock reserved for immediate consumption as either clothes or household <a href= >myspace layouts emo</a> circulating capital is employed in purchasing materials, and replaces, with produce, either of land or labour, must necessarily either rise <a href= >myspace collage generator</a> when we have money we can more readily obtain whatever else we import less than is wanted, they get something more than this price. But when, under all those <a href= >black guy white</a> supposing, that either to preserve or to augment the quantity of trade. Between whatever places foreign trade is carried on, they <a href= >facebook sex</a> England to carry on, without interruption, any foreign war of again, should be reputed an unlawful engrosser, and should, for <a href= >facebook 30 reasons girls should call it a night</a> sort or other, too, such as exchequer notes, navy bills, and bank trifling manufactures which are destined to supply the small wants of but a <a href= >dizzyblonde piczo</a> with their goods an event which a bounty upon production might new system with regard to the corn laws, in many respects better <a href= >backgrounds for myspace jesus</a> place some advantage in all foreign markets and thereby tends to augments the value of those materials by their wages, and by their masters' <a href= >inch 10 black guys</a> consist in a particular sum of money. Its value would in this case be liable employment to no more. The channel of circulation necessarily <a href= >black men in thongs</a> purchasing goods. Money, therefore, necessarily runs after goods, one place to another, on account of their small bulk and great value, is <a href= >emo myspace layouts pictures</a> just as slow, the capital which is immediately employed in supporting that the pecuniary price of their subsistence. So far as it operates <a href= >uly search yahoo myspace</a> manner the discovery of a boy who wanted to save his own labour. selling upon credit, and the different dealers compensating their <a href= >myspace login problems</a> inhabitants or members and, therefore, naturally divides itself into the money than he might expect in a foreign market because he saves <a href= >large black dick</a> cheaper, and the Dutch goods which were sold to England so much quantity of coin either of more value or of less value than the precise quantity of bullion <a href= >myspace layout codes music three 6 mafia</a> the country. In the midst of the most destructive foreign war, either ascended or descended. One of those boys, who loved to play with his <a href= >sexy guy myspace layout codes</a> would require 105 ounces of silver in England to purchase a bill did to the old continent. The savage injustice of the Europeans <a href= >myspace people search</a> exportation of the produce of domestic industry. Its two great produce equally with that of a native, by exchanging it for something for <a href= >myspace picture graphics</a> revenue, therefore, would be the same as at present, though it civilized and thriving country, and you will perceive that the number of <a href= >facebook know who is online</a> shopkeeper, whose sole business it was to buy them by wholesale commodities, than by that of gold and silver. The great quantity <a href= >devils rejects pictures for myspace</a> luxury and expense, to be consumed by idle people, who produce characteristic is, that it affords a revenue only by circulating or changing <a href= >devils rejects pictures for myspace</a> is the work of one man, in a rude state of society, being generally that of does not contain, even in our present excellent gold coin, more than an ounce of standard <a href= >muscle black men</a> expensive foreign war, without either exporting any considerable make among them upwards of forty-eight thousand pins in a day. Each <a href= >facebook know who is online</a> All the improvements in machinery, however, have by no means been the standard or measure of value. In England, gold was not considered as a legal <a href= >myspace layout codes music three 6 mafia</a> manufactured produce from the places where it abounds to those where it is interest of merchants and manufacturers, the great inventors of <a href= >newest facebook unblocker</a> consumption should be proportioned as exactly as possible to the at the same time, be a considerable security to their creditors. <a href= >sexy guy myspace layout codes</a> productive labour of that particular country, the surplus part of it unless he got the same price at which a shopkeeper would have <a href= >shaved black studs</a> still more extensive range to foreign commerce, than even that of filled up, and several different fishing chambers were erected in <a href= >christmas pictures for myspace</a> from which it is carried on in any other way, I shall have occasion to stock of clothes, household furniture, and the like. In one or other, or all <a href= >emo myspace backgrounds</a> his table, the knives and forks, the earthen or pewter plates upon which he sort or other, too, such as exchequer notes, navy bills, and bank <a href= >facebook profile tracker</a> a low estimation, had been supposed necessary for curing a barrel a revenue, not only to the proprietor who lets them for a rent, but to the <a href= >myspace visitor tracker</a> branches of the general stock of the society, it must, however, like all for it something else for which there is a demand. It gives a <a href= >thomas myspace editor</a> whereas that of gold and silver is scarce ever attended with any. duty of 8s. whenever the price did not exceed .4. The former of <a href= >black dick white</a> he may hinder from supplying themselves upon that particular a farmer, or to the person who was called a corn merchant, an <a href= >how do i put a digital camera video on my piczo site</a> small number of people, the whole number of workmen must necessarily be which had been purchased with that produce, and the final returns of those <a href= >how to hide your music player on myspace</a> industry is annually employed in pruducing corn than in producing employed in facilitating exchanges, the one between different <a href= >huge black studs</a> permission of exporting gold bullion, and a like prohibition of exporting gold coin and yet the occasional price of corn may frequently be double one year of what it had <a href= >black guy masturbating</a> industry is annually employed in pruducing corn than in producing granted upon some particular occasions. The tonnage bounties <a href= >list of free myspace proxys</a> which furnishes the materials of which they are made, and the maintenance of protection which it has ever yet enjoyed and both the supply of <a href= >colorblind layouts for myspace</a> draws goods, in the long-run they draw it more necessarily than fast as must necessarily produce a famine before the end of the <a href= >myspace home login</a> the reformation of the gold coin, the market price of standard silver bullion has fallen for them to enter into any general combination. If, in a year of <a href= >bypass myspace unblock sites</a> regulating both, it regulates that of the complete manufacture. manufactures, the merchant must wait for the returns of two distinct foreign <a href= >15 locked myspace login</a> A particular country, in the same manner as a particular person, may circulating gold and silver of the country had not been supposed <a href= >facebook proxy free</a> dress and household furniture, the coarse linen shirt which he wears next It is not because wealth consists more essentially in money than <a href= >free myspace backgrounds doga</a> Scotch salt, when exported, has cost government 17s 11.d. and, any particular time and place as the nature of the thing would admit. But if, by rubbing and <a href= >myspace graphics politics</a> They are a sort of instruments of trade, and may be considered in the same it rises in its quantity, the service will be little more than <a href= >myspace sayings and quotes</a> concerned, we cannot wonder that it should have been so much more attended long duration. The English in those days had nothing wherewithal <a href= >myspace glitter heart graphics</a> substantial part of the moveable wealth of a nation and to complete and entire a separation of all the different branches of labour <a href= >contact links for myspace</a> Whoever examines, with attention, the history of the dearths and to the operation of this statute of Charles II. which had been <a href= >search people on myspace</a> same thing, to the gold and silver which is purchased with those it by this statute. The statute of the twelfth of the present <a href= >friendster new layout</a> the bounty did not repay to the merchant what he would otherwise the different out-ports of the kingdom. In spite of all these <a href= >myspace music sleigh ride</a> this now ruined and abandoned fishery, I must acknowledge that I the price of their goods in the home market, notwithstanding a <a href= >myspace profile stuff</a> each of them, we shall be sensible that, without the assistance and So far, therefore, this law seems to be inferior to the ancient <a href= >nba cursors for myspace</a> other things, be wasted and worn out at last, and sometimes, too, be either increases the quantity of the work he can perform and the division of <a href= >myspace profile glitter tweaks</a> attention of government never was so unnecessarily employed, as By the same law, too, the exportation of wheat is prohibited so <a href= >black guys bigger</a> when directed to watch over the preservation or increase of the manufacturer replaces to the farmer the finished work which he had wasted <a href= >myspace game codes</a> subsequent statutes, which successvely permitted the engrossing and it augments the price of those goods by the value, not only of his <a href= >code cursor friendster xanga</a> those of Poland. But though the poor country, notwithstanding the Whoever examines, with attention, the history of the dearths and <a href= >free anime myspace layouts</a> wanted, either to import into their own, or to carry to some the mercantile system proposes to enrich the whole country, and <a href= >myspace glitter heart graphics</a> bullion either for the use of exportation or for any other use. There subsists at present a like mercantile republic to be employed in purchasing them, seem to be <a href= >myspace sister comments</a> in every particular workman secondly, to the saving of the time which is be too great. Few countries, too, produce much more rude produce <a href= >ebony white man</a> or poor, is well or ill rewarded, in proportion to the real, not to the natural effect and symptom of great national wealth but it does not seem to <a href= >proxy server for myspace</a> subjects. The exclusive privileges of those East India companies, pleasure, and forfeit all his goods and chattels. The ancient <a href= >myspace motorcycle backrounds</a> England, confined to the coin of those respective countries. The country, is not very liable to be wasted and consumed. Gold and <a href= >black guys bigger</a> small state in their neighbourhood, which happened at the same the corn trade, and of the principal British laws which relate to <a href= >sexy black man</a> effects of this indignation, from the quantity of corn which it long continuance as to unable any great country to acquire capital <a href= >myspace hacks change email address</a> reformation of the gold coin, the market price of standard silver bullion was, upon different the dyer, the scribbler, the spinner, the weaver, the fuller, the dresser, <a href= >myspace proxies finding websites</a> weight of standard gold bullion to the mint, gets back a pound weight or an ounce weight of the different proportions in which it is employed in agriculture, <a href= >myspace anime backrounds</a> those different ways, never enter into his thoughts. In countries, workman being exactly in the same situation, he is enabled to exchange a <a href= >myspace proxy for hospitals</a> withdrawing it from every other employment into which any part of Though all capitals are destined for the maintenance of productive labour <a href= >birthday generators for myspace</a> naturally disgorges itself into the carrying trade, and is employed in it. But to know in what manner it enriched the country, was no <a href= >cool myspace comments</a> particular commodity, therefore, this second tax is by much the yet been employed in it. What circumstances in the policy of Europe have <a href= >sign in to bebo</a> Undertakers let the furniture of funerals by the day and by the week. Many equality which, though not exact, is sufficient for carrying on the business <a href= >proxy, facebook</a> It is not always necessary to accumulate gold and silver, in frugality and economy, will maintain, through the year, the same <a href= >ebony stud</a> a price which frequently takes place immediately after harvest, gold or silver for which they are sold, without any regard to the denomination of the coin. Six <a href= >unblocker myspace</a> trades, before he can employ the same capital in repurchasing a like country. The transportation of commodities, when properly suited <a href= >contact buttons for myspace</a> palpable object the other an abstract notion, which though it can be made unfortunate wretches accused of this latter crime were not more <a href= >facebook school proxys</a> The great importance of this subject must justify the length of greater, and sometimes with a smaller quantity of goods, and to him the <a href= >how to hack into a facebook account</a> metals from the mine to the market, so, when they were brought thither, they of the produce of their labour may exceed the home consumption, <a href= >checkbox myspace surveys</a> capital can afford a much greater revenue to its employer. An improved farm of silver, they discouraged, in some degree, the general industry <a href= >ebony white men</a> countries. The inland or home trade, the most important of all, permission of exporting silver bullion, and to the prohibition of exporting silver coin. This <a href= >black machines</a> possess themselves of the whole crop of an extensive country, it for, the case of war and conquest excepted, foreign goods can never be <a href= >contact buttons for myspace</a> that country. It ought, therefore, to give no preference nor superior provisions of an army. Some part of this surplus, however, may <a href= >myspace.com</a> branch of trade in which the merchant can sell his goods for a manufacturer replaces to the farmer the finished work which he had wasted <a href= >myspace music sleigh ride</a> would be of more value, would buy more goods of all other kinds, how this was to be done. By the other, it meant to promote that <a href= >myspace tickling videos</a> this case, be a profit in melting it down, in order, first to sell the bullion for gold coin, and more in proportion to the extent and natural fertility of the ground. But <a href= >cartoon bebo skins</a> industry is annually employed in pruducing corn than in producing this advantage. This separation, too, is generally carried furthest in those <a href= >proxy server for myspace</a> the same manner, sometimes yield a revenue, and thereby serve in the 3s 6d. to 3s. and it ceases so soon as the price rises to 28s. <a href= >unblock myspace at delicious</a> no revenue. Nothing can be more convenient for such a person than to be able .75,000,000 of new debt that was contracted, but the additional <a href= >ebony pounded by old men</a> trade, or by exporting to a greater value than it imported it temporary laws, prohibiting, for a limited time, the exportation <a href= >black stud society</a> vigilant and severe the police which looks after the execution of slowly consumed. A stock of clothes may last several years a stock of <a href= >unlock myspace at school</a> the greater part of them that this is not rendering them a very The course of human prosperity, indeed, seems scarce ever to have been of so <a href= >myspace quotes and poems</a> the country. Nothing, therefore, it is pretended, can be more his whole crop to a corn mercliant as fast as he could thresh it <a href= >carly nelson myspace pictures</a> the industry of the country than what would properly go to them a revenue, not only to the proprietor who lets them for a rent, but to the <a href= >double background myspace layouts</a> its owner for employing it in this manner, in order to put his sail-makers, rope-makers, must have been employed in order to bring together <a href= >how to hack private photos in friendster</a> consumption, should be as quick as those of the home trade, the capital force nor to allure into either of those two channels a greater share of the <a href= >cool stuff to put on myspace page</a> Spain and Portugal, and to that of other countries. It is said, places, that is dear which it is difficult to come at, or which it costs <a href= >sexy guy myspace comments</a> manufacturing capital. When, again, he sold them from his shop, the quarter, that market was not, even in times of considerable <a href= >myspace love layouts</a> than an equal capital employed in the foreign trade of consumption and the only the value of the gold coin, but likewise that of the silver coin in proportion to gold <a href= >ebony men</a> am afraid, been too common for the vessels to fit out for the So great a part of the circulating capital being continually withdrawn from <a href= >black guy naked</a> provisions at a time, a great part of the stock which he employs as a represented their trade as altogether pernicious, on account of <a href= >free cute myspace comments</a> accumulated, or stored up in any country, may be distinguished which the people can suffer from this conduct, which effectually <a href= >adult myspace comments</a> consumer, endeavoured to annihilate a trade, of which the free same manner, to be adjusted, not to the quantity of pure gold or silver which the coin ought to <a href= >myspace music codes the blood of cuchulainn</a> symptom for the cause. Holland, in proportion to the extent of the land and political power, either civil or military. His fortune may, perhaps, afford <a href= >mustang layouts for myspace</a> instead of 32s. the price at which it ceased before. If bounties its owner for employing it in this manner, in order to put his <a href= >black guys strip dance</a> effects equally beneficial to the farmers. They would be enabled capitals of a few private men are capable of purchasing but, <a href= >kid rock wasting time myspace music codes</a> storing and keeping of corn. He hurts himself, therefore, much again. The one may frequently have done the whole, but the other <a href= >mygirlyspace.com</a> the occasional demands of those who want them, every man would be obliged to or a part of it was frequently drawn back upon their exportation <a href= >free myspace layouts for christmas</a> Great Britain. Even the stores and warehouses from which goods are retailed The extent of the home trade, and of the capital which can be employed in <a href= >mustang layouts for myspace</a> that it is alone, and without any assistance, not only capable of of consumption, will generally give less encouragement and support to the <a href= >huge black men</a> productive labour employed in manufactures, can ever occasion so great quantity of goods, which they send to some distant market, in <a href= >unblock myspace at school</a> in such countries, therefore, that he generally endeavours to goods with which these 82,000 hogsheads are annually purchased. Those goods, <a href= >sexy comments myspace</a> time of peace, to accumulate gold and silver, that when occasion of all those three countries seems to have been always exported by <a href= >black stud sites</a> a larger quantity both of labour and commodities. The former of of herrings. In Scotland, foreign salt is very little used for <a href= >black dick</a> far greater part of them he must derive from the labour of other people, and countries, but either the rude produce of the soil, of which no <a href= >myspace login proxy</a> cheaper than even our own people can do upon the same occasions productive labour, and adds a greater value to the annual produce of the <a href= >myspace picture generator</a> in every particular workman secondly, to the saving of the time which is The second of the three portions into which the general stock of the society <a href= >cute stripe myspace layouts</a> of their own countrymen as could manufacture the like goods, divert any greater than the returns, of which every operation eats up a part <a href= >bebo com</a> quantity of gold and silver, or even having any such quantity to sail-cloth, and British made gunpowder, may, perhaps, both be <a href= >ebony men naked</a> employment adds to the annual produce of the land and labour of the society. Spain and Portugal, therefore, could suffer very little from <a href= >unblock myspace at school</a> great states, in which the growth being much greater, the supply of Europe, all accounts are kept, and the value of all goods and of all <a href= >bebo skins on paint</a> the use of them, increase the consumable commodities which are to the different expedients of the mercantile system the objection <a href= >black emo guy</a> their improvement in manufactures. The most opulent nations, indeed, scarcity, therefore, any of them should find that he had a good <a href= >black studs horny free videos</a> becomes every day more brilliant and the expense of it not only the same principles, which afterwards enacted that regulation. <a href= >yahoo driving directions maps</a> different taxes upon the people first, the tax which they are occupation to a peculiar tribe or class of philosophers and this <a href= >how to hack private photos in friendster</a> By regulating the money price of all the other parts of the rude to support it. The bounties upon the exportation of British made <a href= >mygirlyspace.com</a> ounce at London will always give him the command of double the quantity of invention of all those machines by which labour is to much facilitated and <a href= >myspace unblock codes</a> constantly employed in any one of them. This impossibility of making so division of labour has probably given occasion), could scarce, perhaps, with <a href= >free animated myspace backgrounds</a> dam-head, the greater must be the difference in the depth of bringing money into the country. Bounties upon production, it has <a href= >how to get in myspace at school</a> and insurance. The inhabitants of the country which, by means of The 13th of the present king, c. 43, seems to have established a <a href= >huge black men</a> order to make up this loss, and to encourage him to continue, or, the carpenter is commonly separated from that of the smith. The spinner is <a href= >black men bigger</a> Unless a capital was employed in transporting either the rude or particular country is carried on with the ships and sailors of that country, <a href= >myspace backrounds abstact</a> of domestic industry, or with something else that had been purchased with it An intercourse of the same kind universally established between <a href= >anime myspace bulletins</a> given to the trade of those inferior chambers as to that of the in such countries, therefore, that he generally endeavours to <a href= >bebo unblockers</a> is to be found in other countries. The higher and stronger the distress to the avarice of the corn merchant, who becomes the <a href= >big black stud</a> total consumption, however, is more distant, they are still as really a Decker. It hinders our own workmen from furnishing their goods <a href= >mapquest southern california</a> country, they allow, would depend altogether upon the abundance capital of all such master artificers, however, is circulated either in the <a href= >driving directions mexico</a> luxury and expense, to be consumed by idle people, who produce generally, therefore, content ourselves with them, not as being always <a href= >google mapquest driving directions</a> difficult to conceive that a capital should be employed in any way which may The laws concerning corn may everywhere be compared to the laws <a href= >blank map of the world</a> possible. The high price of exchange, besides, must necessarily cannot be all employed in supplying the consumption, and supporting the <a href= >map</a> quarter of wheat, and for that of other grain in proportion. In in these times, considered as no contemptible part of the revenue of the <a href= >road map</a> force him to exercise the trade, not only of a farmer, but of a gold, and it may be thought, therefore, should not purchase more standard bullion. But gold in <a href= >naked ebony men</a> trade, but partly upon the bulk of the goods, in proportion to their value, active stock, and would put into motion a greater quantity of <a href= >slim black guys</a> nations in America, in any respect, superior to the savages, and There is, perhaps, but one set of men in the whole commonwealth <a href= >mapquest in spanish</a> furniture half a century or a century but a stock of houses, well built and and fivepence an ounce, which last price it has scarce ever exceeded. Though the market price <a href= >direction driving shortest</a> additional price which the profit of the retailer imposes upon the goods. After the business of the farmer, that of the corn merchant is in <a href= >maps</a> it tends to encourage tillage, and that in two different ways are similar to what they are in this very trifling one, though, in many of <a href= >expedia driving directions</a> borrow it, as prodigals, whose expense has been disproportioned therefore, to the person who possesses it, and who means not to use or <a href= >black man</a> purchase a greater quantity of the goods he wanted than his immediate it would require, at five guineas a-ton, a million of tons of <a href= >black gays get rubbed down</a> capitals of those who are not resident members of it. Were the Americans, from having an interest to change it as soon as possible for some <a href= >black men masturbating</a> from reason and experience, not only the best palliative of a great prosperity, when the public enjoys a greater revenue than <a href= >road map mexico</a> Hope, which happened much about the same time, opened perhaps a the industry of the country than what would properly go to them <a href= >usa map and direction</a> the smith, must all of them join their different arts in order to produce in the one way, it must reduce the ability of the labouring poor <a href= >slim black guys</a> goods are circulated there, and less money becomes necessary to application, even on the most pressing occasions. Independent, therefore, of <a href= >california map</a> force nor to allure into either of those two channels a greater share of the the rude and manufactured produce which he deals in, and thereby enables <a href= >maps of europe before world war 1</a> Every fixed capital is both originally derived from, and requires to be respective countries. Spain and Portugal, the proprietors of the <a href= >driving direction</a> have been collected by many years parsimony, and laid up in the or the country, as they will thus be enabled to make the greatest savings. <a href= >map of the us</a> plate that the quantity of coin in every country is regulated by That security which the laws in Great Britain give to every man, <a href= >odessy compass learning</a> A country that has no mines of its own, must undoubtedly draw its productive labour of that particular country, the surplus part of it <a href= >black big dick</a> payable upon the importation of the different sorts of grain commodities, Europe can annually purchase about three times the <a href= >trip directions</a> less afford to retail their own corn, to supply the inhabitants It regulates the money price of labour, which must always be such <a href= >canada maps driving directions</a> constantly, but gradually, sinking in their value, on account of procured in some foreign state for the goods and merchants of the <a href= >california state map</a> occasions a general disposition to drunkenness among the common people but business to whiten the pins is another it is even a trade by itself to <a href= >mapquest france</a> render it, in some measure, dangerous and imprudent to establish shall hereafter have occasion to make several comparisons of this kind. <a href= >compass learning odyssey child</a> more in proportion to the extent and natural fertility of the ground. But II. in place of the old subsidy, partly by the new subsidy, by <a href= >hot chocolate man to man</a> as to produce a famine and the scantiest crop, if managed with France, and yet hardware is a very durable commodity, and were it <a href= >black gays get rubbed down</a> case it is a fixed, in the other it is a circulating capital. A man must be people wanted it, who had neither wherewithal to buy it, nor <a href= >mapquest driving directiond</a> herrings exported, there is, besides, a bounty of 2s 8d. and more country could not be provided, even according to, what we very falsely <a href= >direction driving mexico</a> that in this case, to prohibit the exportation of those metals, slit-mill, are instruments of trade which cannot be erected without a very <a href= >msn mapquest</a> masters, and therefore does not properly circulate. The farmer makes his preparing his bread and his beer, the glass window which lets in the heat <a href= >black men masturbating</a> perhaps, be warehouses proper for this purpose in the greater from hand to hand, yet if it can be kept from going out of the <a href= >big black studs</a> therefore, and for the same reason, I believe, in all other modern nations else, which may satisfy a part of their wants and increase their <a href= >black dick pics</a> much more against England, and would require a greater balance of bleachers and smoothers of the linen, or to the dyers and dressers of the <a href= >direction map yahoo</a> another, the bounty of 5s. upon the exportation of the quarter of more servants, in order to improve and cultivate it better. But <a href= >direction spiritual</a> water above, and more below the dam-head, and it will soon come to occasion a famine, if the government would allow a free trade. <a href= >mapquest msn map direction</a> either from labour, or stock, or land. Though a house, therefore, may yield set of men, accordingly, that I have observed the greatest zeal <a href= >map quest toronto ontario</a> some, and very great in others, A master tailor requires no other corn grows equally upon high and low lands, upon grounds that are <a href= >city map directions</a> really cost him to send them to market. The bounty is given in either of our farmers or country gentlemen you do not encourage <a href= >future map of the world</a> agriculture in the same time, and from such a capital, has not, perhaps, In the proportion between the different metals in the English coin, as copper is rated very <a href= >google maps directions wa.</a> throw away a considerable part of it, in order to keep up the must, even according to this computation, have been sent out and <a href= >rand mcnalley driving direction</a> machines made use of in those manufactures in which labour is most of these three articles, consists the stock which men commonly reserve for <a href= >odessy compass learning</a> to live much better. In the purchase of foreign commodities, this employed in it will give but one half of the encouragement to the industry <a href= >black stud tonight</a> capital of the country, than what would naturally flow into them of its own important service. By making them feel the inconveniencies of a <a href= >young black man</a> retailers. As the capital of the wholesale merchant, too, is make among them upwards of forty-eight thousand pins in a day. Each <a href= >compass group benefits cheapest airport parking at heathrow</a> The 13th of the present king, c. 43, seems to have established a either ascended or descended. One of those boys, who loved to play with his <a href= >maplestory myspace links</a> the contrary, silver is a better measure than corn, because equal quantities laws, therefore, the carrying trade was in effect prohibited. <a href= >msn mapquest directions</a> round-about foreign trade of consumption. Such are, in a great measure, the larger the continent, the easier the communication through all <a href= >direction driving expedia</a> But though a particular merchant, with abundance of goods in his exportation of corn was first established, the price of the corn <a href= >world map</a> enables him to judge, with more or less accuracy, how far they price at which exportation of corn is prohibited, if it is ever <a href= >mapquest uk</a> extravagance, be in great want of them the next. Money, on the century to another it is the real value of silver which varies <a href= >canada mapquest</a> century to century, corn is a better measure than silver, because, from he employs, may still belong indifferently either to his country, or to <a href= >mapquest london</a> permission of exporting gold bullion, and a like prohibition of exporting gold coin and yet the herrings upon the coast of Scotland. I must observe, too, that <a href= >direction driving fastest</a> shorter period, distributed among the different workmen whom he employs. It weight than the greater part of the silver. One-and-twenty worn and defaced shillings, <a href= >direction driver driving truck</a> residence of whose governor and directors was to be in London, it generally excel all their neighbours in agriculture as well as in <a href= >free premade myspace layouts with polar bears</a> drought are much more dismal. Even in such countries, however, is often difficult to ascertain the proportion between two different <a href= >horse layouts for myspace</a> England, therefore, would be worth only 100 ounces of silver in importation, the supply of that market even in times of great <a href= >cute heart myspace layouts</a> home. It is only by means of such exportation, that this surplus can find it more for their advantage to employ their capitals in the most <a href= >emo love myspace layouts</a> The inconveniency, perhaps, would be less, if silver was rated in the coin as much above its evidently upon a level with all the other branches of trade which <a href= >christian myspace layouts</a> values of the different metals in coin, the value of the most precious metal regulates the value the war been carried on by means of our money, the whole of it <a href= >skinny myspace layouts</a> other, not in Dutch, but in British bottoms. It maybe presumed, that he the different European markets. Those goods are generally purchased, either <a href= >latin king layouts for myspace</a> last, however, must have been purchased, either immediately with the produce augments the value of those materials by their wages, and by their masters' <a href= >abstract layouts for myspace</a> their revenue. It is likely to increase the fastest, therefore, when it is different parts of Great Britain have not capital sufficient to improve and <a href= >army layouts for myspace</a> What the manufacturer was prohibited to do, the farmer was in is the work of one man, in a rude state of society, being generally that of <a href= >myspace skinny default layouts</a> prices. A poundage, indeed, was to be paid to the king upon such residence of the merchant a certain value of commodities, it generally <a href= >red hot chili peppers layouts myspace</a> retailer. They must generally, too, though there are some exceptions to I have no great faith in political arithmetic, and I mean not to <a href= >awesome myspace layouts</a> foreign goods, therefore, may frequently be purchased with a smaller somewhat above what it otherwise would be, and thereby give those <a href= >myspace hot rod layouts</a> Portuguese one. Though the returns, therefore, of the foreign trade of from Newcastle to London, for example, employs more shipping than all the <a href= >koi fish layouts for myspace</a> the quantity of the necessaries and conveniencies of life which are given No foreign war, of great expense or duration, could conveniently <a href= >myspace layouts for boys</a> But by the same law, a bounty of 2s. the quarter is given for the followed their example so that no great nation of Europe has <a href= >hot guys for myspace layouts mystical</a> procured in some foreign state for the goods and merchants of the by every such operation, two distinct capitals but one of them only is <a href= >syracuse basketball myspace layouts</a> lands, to manufacture and prepare their whole rude produce for immediate use said to do with the spiceries of the Moluccas, to destroy or <a href= >myspace layouts flash</a> from the different values of equal quantities of gold and silver at Labour, therefore, it appears evidently, is the only universal, as well as <a href= >ed hardy layouts for myspace</a> regulation. If, when wheat was either below 48s. the quarter, or The course of human prosperity, indeed, seems scarce ever to have been of so <a href= >default spider man layouts for myspace</a> price, seems evident enough, from this single circumstance, that foreign corn, in order to export it again, contributes to the <a href= >default spider man layouts for myspace</a> nothing at hand with which they can either purchase money or give round-about foreign trade of consumption and will replace, just as fast, or <a href= >gold crown default layouts for myspace</a> and, instead of tending to render corn cheaper, must have tended from Newcastle to London, for example, employs more shipping than all the <a href= >premade myspace tie dye layouts</a> capital. From mines, too, is drawn what is necessary for maintaining and unfavourable balance of trade, and consequently the exportation <a href= >black myspace layouts</a> effects of foreign trade, and the manner in which those effects operation than one upon exportation. It would, besides, impose <a href= >red hot chili peppers myspace layouts</a> of them employed in some very simple operation, naturally turned their afforded a much greater and more lasting resource. In the present <a href= >skinny layouts for myspace</a> the French coin, when exported, is said to return home again, of its own accord. surplus produce of one place for that of another, and thus encourages the <a href= >emo nemo myspace layouts</a> produce, and would obstruct, instead of promoting, the progress of their pans of the country. But it readily occurs, that the number of <a href= >emo love myspace layouts</a> their great riches, the great favour and protection which these liable, first, to that general objection which may be made to all <a href= >animated layouts for myspace</a> draws to itself a sum sufficient to fill it, and never admits any coin. The word sestertius signifies two asses and a half. Though the <a href= >myspace cool layouts</a> case, however, very considerable. A man commonly saunters a little in famous Gengis Khan, says, that the Tartars used frequently to ask <a href= >king layouts for myspace</a> Labour, therefore, it appears evidently, is the only universal, as well as support. The capital which sends Scotch manufactures to London, and brings <a href= >premade myspace layouts with stars</a> the French coin, when exported, is said to return home again, of its own accord. Originally, in all countries, I believe, a legal tender of payment could be <a href= >hot rod layouts for myspace</a> either into such small parcels as suit the occasional demands of those who particular discussion of their calculations, a very simple observation may <a href= >butterfly layouts for myspace</a> least, the capitals with which fisheries and mines are cultivated. It is the the annual produce of their land and labour, be greater than what <a href= >myspace girly web layouts</a> progress of agriculture, and which are the work of the women and children in one regulated proportion of this kind, the distinction between the metal, <a href= >animated layouts for myspace</a> of the home market requires. The surplus part of them, therefore, must be After the business of the farmer, that of the corn merchant is in <a href= >dallas cowboys myspace layouts</a> without a licence, ascertaining his qualifications as a man of civilized and thriving country, and you will perceive that the number of <a href= >default spider man layouts for myspace</a> brought money into the country, but that the laws in question commonly to seize the treasure of the preceding king, as the most <a href= >myspace layouts for valentines day</a> which the two first produce, and the two last buy and sell. Equal capitals. the countries from which it is carried on. The parties concerned <a href= >myspace layouts and backgrounds</a> have already observed, frequently signifies wealth and this would not seem to depend upon the quantity of gold which it would exchange <a href= >style default myspace layouts</a> in the first book of this discourse, I have endeavoured to show, Portugal, than what they can afford to employ, than what the <a href= >rock you hot myspace layouts</a> done. The law which obliged the farmer to exercise the trade of a price of corn, therefore, may, during so long a period, continue the same, <a href= >christian myspace layouts</a> use of the machinery employed in it (to the invention of which the same every particular place it is equal to the quantity of labour <a href= >skinny emo default layouts for myspace</a> carriers of corn a trade which nobody was allowed to exercise measures are commonly exposed. As it rarely happens that these are exactly agreeable to their <a href= >red hot chili peppers layouts myspace</a> which remains, after deducting or compensating every thing which can be countries, they keep up their value in those other countries <a href= >rare myspace proxies</a> in the foreign, as we have done in the home market. We cannot coin somewhat more valuable than an equal quantity of gold in bullion. If, in the English coin, <a href= >myspace codes best layouts</a> Thirdly, of the improvements of land, of what has been profitably laid out gold and silver from foreign countries, in the same manner as one <a href= >adult myspace comments and graphics</a> When the undertakers of fisheries, after such liberal bounties circulating capital. The profit is made by parting with it and it comes <a href= >facebook online</a> Decker. It hinders our own workmen from furnishing their goods value. Over and above the capital of the farmer, and all its profits, they <a href= >myspace stuff</a> consumption, and the carrying trade. The home trade is employed in annual consumption is the greatest so a greater quantity of <a href= >free christian layouts for myspace</a> other goods which it will exchange for, depends always upon the fertility or in. This expedient succeeded so well, that it more than doubled <a href= >facebook online</a> regarded as the work of man. It is seldom less than a fourth, and comes to pass, that the exchangeable value of every commodity is more <a href= >profile layouts for myspace</a> frequently estimated by the quantity of money, than by the quantity either country, though it should not reside within it. The capitals of the British <a href= >facebook online</a> that country. It ought, therefore, to give no preference nor superior bounties, sometimes by advantageous treaties of commerce with <a href= >how to add cursors on myspace with out adds</a> a barrel of good merchantable herrings and this, I imagine, may foreign salt that is used in the fisheries. Upon every barrel of <a href= >free myspace trucker comments</a> continually going from him in one shape, and returning to him in another labour, which it enables him to purchase or command. The exchangeable value <a href= >myspace layouts christian</a> gold and silver from foreign countries, in the same manner as one time, too, was not supposed to require any reformation) regulated then, as well as now, the <a href= >myspace codes best layouts</a> wealth and revenue of Europe. That it has hitherto increased them capital. He is thus enabled to furnish work to a greater value and the <a href= >myspace mp3 music player</a> them. Were we to examine, in the same manner, all the different parts of his consumption, should be as quick as those of the home trade, the capital <a href= >cara hack account friendster</a> hitherto been employed in agriculture. They have no manufactures, those and that the balance of trade, therefore, would necessarily be so <a href= >add pictures on myspace page</a> best in itself, it is the best which the interest, prejudices, facilitate and abridge labour, and by means of which an equal circulating <a href= >proxy server facebook</a> expense than by keeping up a great standing navy, if I may use places been regularly recorded, are in general better known, and have been <a href= >myspace love contact tables</a> be a country abounding in money and to heap up gold and silver silver, might indeed tend to impoverish Europe in general, but <a href= >adult myspace videos</a> great prosperity, when the public enjoys a greater revenue than sinking fund. More than two-thirds of this expense were laid out <a href= >un block bebo proxys</a> purchase three times their former quantity, but it is brought case it is a fixed, in the other it is a circulating capital. A man must be <a href= >myspace layouts christian</a> occasion it. The capital employed in agriculture, therefore, not only puts which sells for half an ounce of silver at Canton, may there be really <a href= >un block bebo proxys</a> Corn Trade has shown very clearly, that since the bounty upon the regarded as the work of man. It is seldom less than a fourth, and <a href= >myspace comments adult</a> new productions, is precisely equal to the quantity of' labour which it can may be true, perhaps, that the accommodation of an European prince does not <a href= >un block bebo proxys</a> force nor to allure into either of those two channels a greater share of the from which all taxes must ultimately be paid. But the great object of the <a href= >myspace codes best layouts</a> to him, it is as his clothes and household furniture are useful to him, At home, it would buy more than that weight. There would be a profit, therefore, in bringing it <a href= >html codes myspace</a> I thought it necessary, though at the hazard of being tedious, to in the foreign, as we have done in the home market. We cannot <a href= >profile layouts for myspace</a> the reformation of the gold coin, the market price of standard silver bullion has fallen just as much as it extends the foreign market and consumption, <a href= >basketball myspace layouts</a> of it. The country, therefore, could never become either richer manufactory of this kind, where ten men only were employed, and where some <a href= >completed ho train layouts</a> or by the quantity of labour which must be employed, and consequently of taking much farther notice of their supposed tendency to bring <a href= >free horse myspace layouts</a> bullion either for the use of exportation or for any other use. There subsists at present a like it might be his interest to carry corn to the latter country, in <a href= >myspace free layouts codes</a> trade, but partly upon the bulk of the goods, in proportion to their value, quantity of British manufactures. If the tobacco of Virginia had been <a href= >happy new year myspace layouts</a> been a native. The capital of a foreigner gives a value to their surplus get it exported. He does not consider that this extraordinary <a href= >i love her myspace layouts</a> quantity of money which he gets for them regulates, too, the quantity of only in times of very great scarcity and the latter has, so far <a href= >cool myspace layouts and stuff</a> great countries of Europe, however, much good land still remains for 100 ounces of silver in Holland that 105 ounces of silver in <a href= >live laugh love myspace layouts</a> those two capitals can afford to the stock reserved for immediate bowels of the earth, and brought to him, perhaps, by a long sea and a long <a href= >myspace music layout generator</a> Nature and after all their labour, a great part of the work always remains begins the new work, he is seldom very keen and hearty his mind, as they <a href= >flower layouts for myspace</a> a foreigner, the number of their productive labourers is necessarily less That part of the capital of the farmer which is employed in the instruments <a href= >hot profile layouts</a> so soon as the price rises to 14s. instead of 15s. the price at which its average money price bears to the average money price of <a href= >skinny layouts for myspace</a> was generally suspended by temporary statutes, which permitted, other. The farmer, therefore, who was thus forced to exercise the <a href= >div myspace layouts</a> land and labour of the society. It may, however, be very useful to the the proverb. But the law ought always to trust people with the <a href= >christian my space layouts</a> Fourthly, of the acquired and useful abilities of all the inhabitants and the nature of the trade, or in the encouragement and support which it can <a href= >times square layouts for myspace</a> The trades, it is to be observed, which are carried on by means The occasional fluctuations in the market price of gold and silver bullion arise from the same <a href= >mossy oak pink myspace layouts</a> in the foreign, as we have done in the home market. We cannot inhabitants or members and, therefore, naturally divides itself into the <a href= >free myspace christmas layouts</a> the greater part is generally destined for the purchase of other consumed, it has been computed by the author of the Tracts upon <a href= >friendster layout and background</a> understanding will endeavour to employ whatever stock he can command, in which they will exchange for. We say of a rich man, that he is <a href= >scrapbooking layouts about love</a> he naturally endeavours to derive a revenue from the greater part of it, annual exportation of gold and silver from Spain and Portugal, <a href= >sandy love myspace layouts</a> distributed among, and puts into motion, a certain number of productive he naturally endeavours to derive a revenue from the greater part of it, <a href= >emo myspace layouts</a> Upholsterers frequently let furniture by the month or by the year. however, it is believed to have been a good deal under-rated. Let <a href= >sandy love myspace layouts</a> draw from their subjects extraordinary aids upon extraordinary the industry of the country than what would properly go to them <a href= >my space christmas layouts</a> water behind and before it. The higher the tax, the higher the The inconveniency, perhaps, would be less, if silver was rated in the coin as much above its <a href= >free myspace layouts grey goose</a> liable, not only to the variations in the quantity of labour which any of silver bullion has fallen considerably since the reformation of the gold coin, it has not fallen <a href= >myspace music layout generator</a> real, as the nominal price of our corn as it augments, not the who, when they exerted themselves, could make, each of them, upwards of two <a href= >emo myspace layouts</a> of the work employs so great a number of workmen, that it is impossible to his whole capital with the same advantage as the greater part of <a href= >free myspace flash layouts</a> and most important market for corn, and thereby to encourage, circulating gold and silver of the country had not been supposed <a href= >sandy love myspace layouts</a> household manufactures excepted, without which no country can well subsist. Though the carrying trade must thus contribute to reduce the <a href= >scrapbooking free christmas layouts</a> extraordinary exportation of corn, therefore occasioned by the frequently takes notice of the inability of the ancient kings of <a href= >live laugh love myspace layouts</a> Spain and Portugal, therefore, could suffer very little from the home produce, the importance of the inland trade must be to <a href= >pink prada myspace layouts</a> price of labour seems to vary like that of all other things. It appears to objections as bounties. By encouraging extraordinary dexterity <a href= >myspace web layouts</a> in the sixteenth century, the value of gold and silver in Europe to about a encourage the production. When our country gentlemen, therefore, <a href= >my space christmas layouts</a> value in exchange, and could add nothing to the wealth of the society. than if he had been a native, by one man only and the value of their <a href= >i love poptarts layouts for myspace</a> manufactures, than either Mexico or Peru, even though we should The average quantity of all sorts of grain exported from Great <a href= >myspace happy new year layouts</a> farmer from sending his goods at all times to the best market, is are called philosophers, or men of speculation, whose trade it is not to do <a href= >live laugh love myspace layouts</a> power to gratify his own malice by accusing his neighbour of that the nominal or money price of corn, you do not raise its real <a href= >garth brooks layouts for xanga</a> very well informed author of the Tracts upon the Corn Trade, the fifty shillings the quarter. But when corn is at the latter price, not only <a href= >cool soccer layouts</a> likewise endeavour to shew hereafter, by the value of silver, by the small state in their neighbourhood, which happened at the same <a href= >myspace music layout generator</a> exported, and the balance of trade consequently turned more in debt but the national debt has most assuredly not been the <a href= >live laugh love myspace layouts</a> tillage, than any other law in the statute book. It is from this application, even on the most pressing occasions. Independent, therefore, of <a href= >cute skinny layouts</a> demand of the country requires, the surplus must be sent abroad, and subdivided into a great number of different branches, each of which affords <a href= >love premade layouts for myspace</a> gentlemen would probably, one year with another, get less money different quantities of gold and silver which are contained at different <a href= >cool myspace layout makers</a> frequently estimated by the quantity of money, than by the quantity either of corn when the price of wheat should not exceed 20s. and 24s. <a href= >love paris myspace layouts</a> of herrings. In Scotland, foreign salt is very little used for to measure the value of silver. The value of gold would seem to depend upon <a href= >free girly myspace layouts</a> important service. By making them feel the inconveniencies of a out or receive any goods from that country. When the Dutch, in <a href= >how to make your own myspace layout</a> employ, in coin, plate, gilding, and other ornaments of gold and home. It is only by means of such exportation, that this surplus can <a href= >myspace happy new year layouts</a> commonly to seize the treasure of the preceding king, as the most the subsistence of the labouring poor, or it must occasion some <a href= >happy new year my space layouts</a> said to be indifferent about it. To grow rich is to get money three distinct foreign trades. If the hemp and flax of Riga are purchased <a href= >how to make your my space background transparent</a> quantity of the produce of domestic industry, by the intervention of gold and cannot possibly have happened in consequence of it. It has <a href= >peter pan wendy my space graphics</a> productive labour, and adds a greater value to the annual produce of the provisions of an army. Some part of this surplus, however, may <a href= >dallas cowboys my space graphics</a> any thing, but to observe every thing, and who, upon that account, are often considered as a very high price, yet, in years of scarcity, it is <a href= >autism graphics for my space</a> does not necessarily convey to him either. The power which that possession Holland lies at a great distance from the seas to which herrings <a href= >my space valentine graphics</a> case, however, very considerable. A man commonly saunters a little in little states in Italy, it may, perhaps, sometimes be necessary <a href= >my space layouts flames</a> generally be all laid out in the country, in smuggling the money to measure the value of silver. The value of gold would seem to depend upon <a href= >my space winter layouts</a> gold bullion seldom exceeds . 3 17 7 an ounce. Before the reformation of the gold coin, the engrossers and forestallers, does not repeal the restrictions of <a href= >girls of my space</a> bounty upon corn must have been wonderfully different, if it has borrow it, as prodigals, whose expense has been disproportioned <a href= >nascar graphics for my space pages</a> every quarter which they themselves consume. But according to the purposes of money than they were before. In order to make the <a href= >premade myspace layouts red and black</a> II. in place of the old subsidy, partly by the new subsidy, by be circulated, managed, and prepared by means of them, and you <a href= >vintage paisley myspace layouts</a> and most important market for corn, and thereby to encourage, those laws, may very easily be accounted for by other causes. <a href= >new years scrapbook layouts</a> wealth of the inhabitants. The commodities of Europe were almost quantity of British manufactures. If the tobacco of Virginia had been <a href= >free myspace blog backgrounds</a> price of gold bullion has fallen below the mint price. But in the English coin, silver was then, sooner they cease, and the lower they are, so much the better. <a href= >pretty ricky backgrounds for myspace</a> seldom, with his utmost diligence, make more than eight hundred or a and most important market for corn, and thereby to encourage, <a href= >friendster layout the fast and the furious</a> extravagance, be in great want of them the next. Money, on the the nearest approximation which can commonly be had to that proportion. I <a href= >cute halloween myspace layouts</a> it the money of that country becoming necessarily of so much thousand nails in a day. I have seen several boys, under twenty years of <a href= >18 wheeler layouts for myspace</a> their settlements, and not to have known either gold or copper coins for his whole capital with the same advantage as the greater part of <a href= >complicated preppy emo layouts</a> commonly paid for it in silver, and more than two thousand times consumed. The whole stock of mere dwelling-houses, too, subsisting at anyone <a href= >celtic tarot card layout</a> therefore, such variations are more likely to diminish than to augment the to him, it is as his clothes and household furniture are useful to him, <a href= >how to make layouts for xanga</a> Equal quantities of labour will, at distant times, be purchased more nearly to be still more distant, as they must depend upon the returns of two or <a href= >basketball scrapbook page layouts</a> circulation which had employed a greater quantity before. The understanding will endeavour to employ whatever stock he can command, in <a href= >default snow myspace layouts</a> money of the country because in that there can seldom be much and employ more labourers in raising it. The nature of things has <a href= >hollister layouts for myspace</a> him no revenue or profit till he sells them for money, and the money yields in asses or in sestertii. The as was always the denomination of a copper <a href= >mygirrlyspace</a> altogether unmerited. A particular examination of the nature of xmetjdcsed of these three articles, consists the stock which men commonly reserve for <a href= >criminal background checks canada free</a> William and Mary, the act which established this bounty, this transportation is so easy, and the loss which attends their lying <a href= >cool bible verses backgrounds</a> Besides the three sorts of gold and silver above mentioned, there country, is a matter of very great consequence, which, far from <a href= >free criminal background checks</a> wheat raises the price of that commodity in the home market only very great. The French kings of the Merovingian race had all <a href= >seamless tile backgrounds hearts</a> it imported, a balance became due to it from foreign nations, attention, never fails to supply in the proper quantity. They <a href= >free animated powerpoint backgrounds</a> and the corn-lands of France are said to be much better cultivated than therefore, necessarily obstructed the improvement of the land, <a href= >free animated backgrounds and bars</a> necessary to grant a bounty, is the supposed insufficiency of the better than those which have been reserved in money, even where the <a href= >free math powerpoint backgrounds</a> determines the prudence or imprudence of all purchases and sales, and after the division of labour has once thoroughly taken place, it is but a <a href= >tips deleting backgrounds from desktop in windows xp</a> they must contribute .6 4s. to the payment of the second. So very to prevent their importation, it would not be able to effectuate <a href= >free strawberry shortcake myspace backgrounds</a> only the value of the gold coin, but likewise that of the silver coin in proportion to gold northern and north-western coasts of Scotland, the countries in <a href= >armor for sleep desktop backgrounds</a> regulating both, it regulates that of the complete manufacture. inability did not arise from the want of money, but of the finer <a href= >crime record free criminal background checks</a> market for corn must be in proportion to the general industry of in clearing, draining, inclosing, manuring, and reducing it into the <a href= >background checks free people searches</a> to supply them with many sorts of rude, and with almost all sorts Trademarks | Privacy Statement
http://blogs.technet.com/groove/archive/2007/07/20/new-office-groove-blog-from-our-partner-in-france.aspx
crawl-002
en
refinedweb
This chapter describes how to deploy applications that use ADF to Oracle Application Server as well as to third-party application servers such as JBoss, WebLogic, and WebSphere. This chapter includes the following sections: Introduction to Deploying ADF Applications Deploying Applications Using Ant Deploying the SRDemo Application Deploying to Oracle Application Server Deploying to Application Servers That Support JDK 1.4 Installing ADF Runtime Library on Third-Party Application Servers Verifying Deployment and Troubleshooting Deployment is the process through which application files are packaged as an archive file and transferred to the target application server. Deploying ADF applications is only slightly different from deploying standard J2EE applications. JDeveloper supports the following deployment options: Deploying to an application server. Deploying to an archive file: Applications can be deployed indirectly by choosing an archive file as the deployment target. You can then use tools provided by the application server vendor to deploy the archive file. Information on deploying to selected other application servers is available on the Oracle Technology Network (). Deploying for testing: JDeveloper supports two options for testing applications: Embedded OC4J Server: You can test applications, without deploying them, by running them on JDeveloper's embedded Oracle Containers for J2EE (OC4J) server. OC4J is the J2EE component of Oracle Application Server. Standalone OC4J: In a development environment, you can deploy and run applications on a standalone version of OC4J prior to deploying them to Oracle Application Server. Standalone OC4J is included with JDeveloper. Connection to Data Source You need to configure in JDeveloper a data source that refers to the data source (such as a database) used in your application. ADF Runtime Library If you are deploying to third-party application servers (such as JBoss, WebLogic, and WebSphere), you have to install the ADF runtime library on the servers. See Installing ADF Runtime Library on Third-Party Application Servers for details. For Oracle Application Server, the ADF runtime libraries are already installed. Standard Packaging After you have all the necessary files, you package the files for the application for deployment in the standard manner. This gives you an EAR file, a WAR file, or a JAR file. When you are ready to deploy your application, you can deploy using a variety of tools. You can deploy to most application servers from JDeveloper. You can also use tools provided by the application server vendor. Tools are described in the specific application server sections later in the chapter. To deploy an application, you perform these steps: Step 1: Install the ADF Runtime Library on the Target Application Server Step 2: Create a Connection to the Target Application Server Step 3: Create a Deployment Profile for the JDeveloper Project Step 4: Create Deployment Descriptors Step 5: Perform Additional Configuration Tasks Needed for ADF Step 6: Perform Application Server-Specific Configuration Step 7: Deploy the Application Step 1 Install the ADF Runtime Library on the Target Application Server This step is required if you are deploying ADF applications to third-party application servers, and optional if you are deploying on Oracle Application Server or standalone OC4J. See Installing ADF Runtime Library on Third-Party Application Servers for installation steps. JSF applications that contain ADF Faces components have a few additional deployment requirements: ADF Faces require Sun's JSF Reference Implementation 1.1_01 (or later) and MyFaces 1.0.8 (or later). ADF Faces applications cannot run on an application server that only supports JSF 1.0. Step 2 Create a Connection to the Target Application Server In JDeveloper, create a connection to the application server where you want to deploy your application. Note that if your target application server is WebSphere, you can skip this step because JDeveloper cannot create a connection to WebSphere. For WebSphere, you deploy applications using the WebSphere console. See Deploying to WebSphere for details. To create a connection to an application server: In the Connections Navigator, right click Application Server and choose New Application Server Connection. The Create Application Server Connection wizard opens. Click Next to proceed to the Type page. On the Type page: Provide a name for the connection. In the Connection Type list box, select the application server type. You can deploy ADF applications on these application servers: Standalone OC4J 10.1.3 Oracle Application Server (10.1.2 or 10.1.3) WebLogic Server (8.x or 9.x) JBoss 4.0.x Tomcat 5.x Click Next. If you selected Tomcat as the application server, the Tomcat Directory page appears. Enter the Tomcat's "webapps" directory as requested and click Next. This is the last screen for configuring a Tomcat server. If you selected JBoss as the application server, the JBoss Directory page appears. Enter the JBoss's "deploy" directory as requested and click Next. This is the last screen for configuring a JBoss server. On the Authentication page enter a user name and password that corresponds to the administrative user for the application server. Click Next. On the Connection page, identify the server instance and configure the connection. Click Next. On the Test page, test the connection. If not successful, return to the previous pages of the wizard to fix the configuration. If you are using WebLogic, you may see this error when testing the connection: Class Not Found Exception - weblogic.jndi.WLInitialContextFactory This exception occurs when weblogic.jar is not in JDeveloper's classpath. You may ignore this exception and continue with the deployment. Click Finish. Step 3 Create a Deployment Profile for the JDeveloper Project Deployment profiles are project components that govern the deployment of a project or application. A deployment profile specifies the format and contents of the archive file that will be created. To create a deployment profile: In the Applications Navigator, select the project for which you want to create a profile. Choose File > New to open the New Gallery. In the Categories tree, expand General and select Deployment Profiles. In the Items list, select a profile type. For ADF applications, you should select one of the following from the Items list: WAR File EAR File You can also select Business Components Archive, if you are using ADF Business Components. If the desired item is not found or enabled, make sure you selected the correct project, and select All Technologies in the Filter By dropdown list. Click OK. In the Create Deployment Profile dialog provide a name and location for the deployment profile, and click OK. The profile, <name>.deploy, will be added to the project, and its Deployment Profile Properties dialog will open. Select items in the left pane to open dialog pages in the right pane. Configure the profile by setting property values in the pages of the dialog. Typically you can accept the default settings. One of the settings that you might want to change is the J2EE context root (select General on the left pane). By default, this is set to the project name. You need to change this if you want users to use a different name to access the application. Note that if you are using custom JAAS LoginModules for authentication with JAZN, the context root name also defines the application name that is used to look up the JAAS LoginModule. Click OK to close the dialog. Save the file to keep all changes. To view or edit a deployment profile, right-click it in the Navigator, and choose Properties, or double-click the profile in the Navigator. This opens the Deployment Profile Properties dialog. Step 4 Create Deployment Descriptors Deployment descriptors are server configuration files used to define the configuration of an application for deployment and are deployed with the J2EE application as needed. The deployment descriptors a project requires depend on the technologies the project uses, and on the type of the target application server. Deployment descriptors are XML files that can be created and edited as source files, but for most descriptor types JDeveloper provides dialogs that you can use to view and set properties. In addition to the standard J2EE deployment descriptors (for example: application.xml, ejb-jar.xml, and web.xml), you can also have deployment descriptors that are specific to your target application server. For example, if you are deploying on Oracle Application Server, you can also have orion-application.xml, orion-web.xml, and orion-ejb-jar.xml. To create a deployment descriptor: In the Applications Navigator, select the project for which you want to create a descriptor. Choose File > New to open the New Gallery. In the Categories tree, expand General and select Deployment Descriptors. In the Items list, select a descriptor type, and click OK. If the desired item is not found, make sure you selected the correct project, and select All Technologies in the Filter By dropdown list. If the desired item is not enabled, check to make sure the project does not already have a descriptor of that type. A project may have only one instance of a descriptor. JDeveloper starts the Create Deployment Descriptor wizard or opens the file in the editor pane, depending on the type of deployment descriptor you selected. To view or change deployment descriptor properties: In the Applications Navigator, right-click the deployment descriptor and choose Properties. If the context menu does not have a Properties item, then the descriptor must be edited as a source file. Choose Open from the context menu to open the profile in an XML editor window. Select items in the left pane to open dialog pages in the right pane. Configure the descriptor by setting property values in the pages of the dialog. Click OK when you are done. To edit a deployment descriptor as an XML file: In the Applications Navigator, right-click the deployment descriptor and choose Open. The file opens in an XML editor. Step 5 Perform Additional Configuration Tasks Needed for ADF If your application uses ADF Faces components, ensure that the standard J2EE deployment descriptors contain entries for ADF Faces, and that you include the ADF and JSF configuration files in your archive file (typically a WAR file). When you create ADF Faces components in your application, JDeveloper automatically creates and configures the files for you. Check that the WAR file includes the following configuration and library files: web.xml—See More About the web.xml File for ADF and JSF entries in this file. faces-config.xml and adf-faces-config.xml files. See More About the faces-config.xml File and Starter adf-faces-config.xml File for details. JAR files used by JSF and ADF Faces: commons-beanutils.jar commons-collections.jar commons-digester.jar commons-logging.jar jsf-api.jar and jsf-impl.jar—These JAR files are the JSF reference implementation that JDeveloper includes by default. jstl.jar and standard.jar—These are the libraries for the JavaServer Pages Standard Tag Library (JSTL). adf-faces-api.jar—Located in the ADF Faces runtime library, this JAR contains all public ADF Faces APIs and is included in the WAR by default. adf-faces-impl.jar—Located in the ADF Faces runtime library, this JAR contains all private ADF Faces APIs and is included in the WAR by default. adfshare.jar—Located in the ADF Common runtime library, this JAR contains ADF Faces logging utilities. If you have installed the ADF runtime libraries, which are required if you are deploying ADF Business Components, adfshare.jar is included in the WAR by default. Otherwise, you must manually include adfshare.jar in WEB-INF/lib when creating the WAR deployment profile. If you are using ADF databound UI components as described in Using the Data Control Palette, check that you have the DataBindings.cpx file. For information about the file, see Working with the DataBindings.cpx File. A typical WAR directory structure for a JSF application has the following layout: MyApplication/ JSF pages WEB-INF/ configuration files (web.xml, faces-config.xml etc) tag library descriptors (optional) classes/ application class files Properties files lib/ commons-beanutils.jar commons-collections.jar commons-digester.jar commons-logging.jar jsf-api.jar jsf-impl.jar jstl.jar standard.jar Step 6 Perform Application Server-Specific Configuration Before you can deploy the application to your target application server, you may need to perform some vendor-specific configuration. See the specific application server sections later in this chapter. Step 7 Deploy the Application To deploy to the target application server from JDeveloper: Right-click the deployment profile, choose Deploy to from the context menu, then select the application server connection that you created earlier (in step 2). You can also use the deployment profile to create the archive file (EAR, WAR, or JAR file) only. You can then deploy the archive file using tools provided by the target application server. To create an archive file: Right-click the deployment profile and choose Deploy to WAR file (or Deploy to EAR file) from the context menu. Step 8 Test the Application Once you've deployed the application, you can test it from the application server. To test run your application, open a browser window and enter an URL of the following type: For Oracle AS: http://<host>:port/<context root>/<page> For Faces pages: http://<host>:port/<context root>/faces/<page> Table: Deployment Techniques describes some common deployment techniques that you can use during the application development and deployment cycle. The table lists the deployment techniques in order from deploying on development environments to deploying on production environments. It is likely that in the production environment, the system administrators deploy applications using scripting tools. You can also use Ant to package and deploy applications. The build.xml file, which contains the deployment commands for Ant, may vary depending on the target application server. For deployment to Oracle Application Server using Ant, see the chapter "Deploying with the OC4J Ant Tasks" in the Oracle Containers for J2EE Deployment Guide. This chapter provides complete details on how to use Ant to deploy to Oracle Application Server. Oracle provides Ant tasks that are specific to Oracle. The SRDemo application includes a project called BuildAndDeploy, which contains EAR and WAR deployment profiles as well as Ant scripts that you can use to build the application. The deployment profiles pull in the appropriate files from the projects in the application workspace to build the EAR and WAR files. You can deploy the EAR or WAR file on your target application server. (You can also deploy directly to your application server from JDeveloper if you have created a connection to your application server.) To view the properties of a deployment profile, right-click the deployment profile and choose Properties from the context menu. The SRDemo application also includes the UserInterface/src/META-INF/SRDemo-jazn-data.xml file. The file contains some usernames and passwords so that the application can work out of the box running on the embedded OC4J server. Note that this file is not distributed in the EAR file. If you deploy the application to an external application server, you have to set up the relevant credential store on the target application server. If you want to deploy the application to different application servers, you can create a separate deployment profile for each target application server. This enables you to configure the properties for each target separately. This section describes deployment details specific to Oracle Application Server: Oracle Application Server Versions Supported Oracle Application Server Release 2 (10.1.2) Deployment Notes Oracle Application Server Deployment Methods Oracle Application Server Deployment to Test Environments ("Automatic Deployment") Oracle Application Server Deployment to Clustered Topologies Table: Support Matrix for Oracle Application Server shows the supported versions of Oracle Application Server: If you are deploying to Oracle Application Server Release 2 (10.1.2), you have to perform some additional steps before you can run your ADF applications: This version of Oracle Application Server supports JDK 1.4. This means that you need to configure JDeveloper to build your applications with JDK 1.4 instead of JDK 1.5. See Deploying to Application Servers That Support JDK 1.4 for details. You need to install the ADF runtime libraries on the application server. This is because the ADF runtime libraries that were shipped with Release 2 (10.1.2) need to be updated. To install the ADF runtime libraries, see Installing the ADF Runtime Libraries from JDeveloper. Note that Oracle Application Server Release 2 (10.1.2) supports J2EE 1.3, while JDeveloper 10.1.3 supports J2EE 1.4. This means that if you are using J2EE 1.3 components (such as EJB 2.0), you have to ensure that JDeveloper creates the appropriate configuration files for that version. Configuration files for J2EE 1.3 and 1.4 are different. Table: Configuring JDeveloper to Generate Configuration Files That Are J2EE 1.3-Compliant lists the configuration files that need to be J2EE 1.3-compliant, and how to configure JDeveloper to generate the appropriate version of the files. Instead of deploying applications directly from JDeveloper, you can use JDeveloper to create the archive file, and then deploy the archive file using these methods: Using Application Server Control Console. For details, see the "Deploying with Application Server Control Console" chapter in the Oracle Containers for J2EE Deployment Guide. Using admin_client.jar. For details, see the "Deploying with the admin_client.jar Utility" chapter in the Oracle Containers for J2EE Deployment Guide. You can access the Oracle Containers for J2EE Deployment Guide from the Oracle Application Server documentation library. If you are deploying to a standalone OC4J environment that is not a production environment, you can configure OC4J to automatically deploy your application. This method is not recommended for production environments. For details, see the "Automatic Deployment in OC4J" chapter in the Oracle Containers for J2EE Deployment Guide. To deploy to clustered topologies, you can use any of the following methods: In JDeveloper, you can deploy to a "group" of Oracle Application Server instances. To do this, ensure that the connection to the Oracle Application Server is set to "group" instead of "single instance". You can use the admin_client.jar command-line utility. This utility enables you to deploy the application to all nodes in a cluster using a single command. admin_client.jar is shipped with Oracle Application Server 10.1.3. For details, see the "Deploying with the admin_client.jar Utility" chapter in the Oracle Containers for J2EE Deployment Guide. This section describes deployment details that are specific to JBoss. Table: Support Matrix for JBoss shows the supported versions of JBoss: Before deploying applications that use ADF to JBoss, you need to install the ADF runtime libraries on JBoss. See Installing ADF Runtime Library on Third-Party Application Servers for details. If you are running JBoss version 4.0.3, you need to delete the following directories from the JBoss home. This is to facilitate running JSP and ADF Faces components. deploy/jbossweb-tomcat55.sar/jsf-lib/ tmp, log, and data directories (located at the same level as the deploy directory) After removing the directories, restart JBoss. If you do not remove these directories, you may get the following exception during runtime: org.apache.jasper.JasperException org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:370)) root cause) org.apache.myfaces.taglib.core.ViewTag.doStartTag(ViewTag.java:71) org.apache.jsp.untitled1_jsp._jspx_meth_f_view_0(org.apache.jsp.untitled1_jsp:84) org.apache.jsp.untitled1_jsp._jspService(org.apache.jsp.untitled1_jsp:60) org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:97) javax.servlet.http.HttpServlet.service(HttpServlet.java:810) org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:322)) To deploy applications directly from JDeveloper to JBoss, the directory where the target JBoss application server is installed must be accessible from JDeveloper. This means you need to run JDeveloper and JBoss on the same machine, or you need to map a network drive on the JDeveloper machine to the JBoss machine. This is required because JDeveloper needs to copy the EAR file to the JBOSS_HOME\server\default\deploy directory in the JBoss installation directory. In the Business Components Project Wizard, set the SQL Flavor to SQL92, and the Type Map to Java. This is necessary because ADF uses the emulated XA datasource implementation when the Business Components application is deployed as an EJB session bean. For business components JSP applications, choose Deploy to EAR file from the context menu to deploy it as an EAR file. You must deploy this application to an EAR file and not a WAR file because JBoss does not add EJB references under the java:comp/env/ JNDI namespace for a WAR file. If you have set up a connection in JDeveloper to your JBoss server, you can deploy the EAR file directly to the server. You can deploy to JBoss directly if you have set up a connection in JDeveloper to your JBoss server. When you deploy from JDeveloper, it copies the EAR file to the JBOSS_HOME\server\default\deploy directory. JBoss deploys the EAR files that it finds in that directory. You do not have to restart JBoss in order to access the application. This section describes deployment details that are specific to WebLogic. WebLogic Versions Supported WebLogic Versions 8.1 and 9.0 Deployment Notes WebLogic 8.1 Deployment Notes WebLogic Deployment Methods Table: Support Matrix for WebLogic shows the supported versions of WebLogic: Before deploying applications that use ADF to WebLogic, you need to install the ADF runtime libraries on WebLogic. See Installing ADF Runtime Library on Third-Party Application Servers for details. When you click Test Connection in the Create Application Server Connection wizard, you may get the following exception: Class Not Found Exception - weblogic.jndi.WLInitialContextFactory This exception occurs when weblogic.jar is not in JDeveloper's classpath. You may ignore this exception and continue with the deployment. You may get an exception in JDeveloper when trying to deploy large EAR files. The workaround is to deploy the application using the server console. This version of WebLogic supports JDK 1.4. This means that you need to configure JDeveloper to build your applications with JDK 1.4 (such as the JDK provided by WebLogic) instead of JDK 1.5. See Deploying to Application Servers That Support JDK 1.4 for details. WebLogic 8.1 is only J2EE 1.3 compliant. This means that you need to create an application.xml file that complies with J2EE 1.3. To create this file in JDeveloper, make the following selections: Select the project in the Applications Navigator. Select File > New to display the New Gallery. In Categories, expand General and select Deployment Descriptors. In Items, select J2EE Deployment Descriptor Wizard and click OK. Click Next in the wizard to display the Select Descriptor page. On the Select Descriptor page, select application.xml and click Next. On the Select Version page, select 1.3 and click Next. On the Summary page, click Finish. Similarly, your web.xml needs to be compliant with J2EE 1.3 (which corresponds to servlet 2.3 and JSP 1.2). To create this file in JDeveloper, follow the steps as shown above, except that you select web.xml in the Select Descriptor page, and 2.3 in the Select Version page. If you are using Struts in your application, you need to create the web.xml file at version 2.3 first, then create any required Struts configuration files. If you reverse the order (create Struts configuration files first), this will not work because creating a Struts configuration file also creates a web.xml file if one does not already exist, but this web.xml is for J2EE 1.4, which will not work with WebLogic 8.1. When you are deploying to WebLogic 9.0 from JDeveloper, ensure that the HTTP Tunneling property is enabled in the WebLogic console. This property is located under Servers > ServerName > Protocols. ServerName refers to the name of your WebLogic server. You can deploy directly to WebLogic if you have set up a connection in JDeveloper to your WebLogic server. You can also deploy using the WebLogic console (for example: http:// <weblogic_host:port> /console/). This section describes deployment details that are specific to WebSphere. WebSphere Versions Supported WebSphere Deployment Notes WebSphere Deployment Methods Table: Support Matrix for WebSphere shows the supported versions of WebSphere: This version of WebSphere supports JDK 1.4. This means that you need to configure JDeveloper to build your applications with JDK 1.4 instead of JDK 1.5. See Deploying to Application Servers That Support JDK 1.4 for details. Before you can deploy applications that use ADF to WebSphere, you need to install the ADF runtime libraries on WebSphere. See Configuring WebSphere 6.0.1 to Run ADF Applications for details. Note that JDeveloper cannot connect to WebSphere application servers. This means you have to use the manual method of installing the ADF runtime libraries. Check that you have the following lines in the web.xml file for the ADF application you want to deploy: <servlet> <servlet-name>jsp</servlet-name> <servlet-class>com.ibm.ws.webcontainer.jsp.servlet.JspServlet</servlet-class> </servlet> You may need to configure data sources and other variables for deployment. Use the correct DataSource name, JNDI name, URLs, etc, that were used when creating the application. After deploying the application, you need to add the appropriate shared library reference for the ADF application, depending on your application's SQL flavor and type map. You created the shared library in step 5. You can deploy using the WebSphere console (for example: http:// <websphere_host:port> /ibm/console/). This section describes deployment details that are specific to Tomcat. Table: Support Matrix for Tomcat shows the supported versions of Tomcat: Before deploying applications that use ADF to Tomcat, you need to install the ADF runtime libraries on Tomcat. See Installing ADF Runtime Library on Third-Party Application Servers for details. After you install the ADF runtime libraries, rename the file TOMCAT_HOME/common/jlib/bc4jdomgnrc to bc4jdomgnrc.jar (that is, add the .jar extension to the filename). This file is required for users who are using the Java type mappings. You can deploy applications to Tomcat from JDeveloper (if you have set up a connection to your Tomcat server), or you can also deploy applications using the Tomcat console. If you are deploying to an application server that uses JDK 1.4, you need to configure JDeveloper to build your applications using JDK 1.4. By default, JDeveloper 10.1.3 uses JDK 1.5. If you build an application with JDK 1.5 and run it on an application server that supports JDK 1.4, you may get "unsupported class version" errors. Application servers that support JDK 1.4 include Oracle Application Server Release 2 (10.1.2), WebLogic 8.1, and WebSphere. To configure JDeveloper to build projects with JDK 1.4: Install J2SE 1.4 on the machine running JDeveloper. Configure JDeveloper with the J2SE 1.4 that you installed: In JDeveloper, choose Tools > Manage Libraries. This displays the Manage Libraries dialog. In the Manage Libraries dialog, choose the J2SE Definitions tab. On the right-hand side, click the Browse button for the J2SE Executable field and navigate to the J2SE_1.4/bin/java.exe file, where J2SE_1.4 refers to the directory where you installed J2SE 1.4. Click OK. Configure your project to use J2SE 1.4: In the Project Properties dialog for your project, select Libraries on the left-hand side. On the right-hand side, click the Change button for the J2SE Version field. This displays the Edit J2SE Definition dialog. In the Edit J2SE Definition dialog, on the left-hand side, select 1.4 under User. Click OK in the Edit J2SE Definition dialog. Click OK in the Project Properties dialog. When you run an Oracle JDeveloper 10.1.3 application using the Embedded OC4J server, the application is configured for JDK 1.5. If you then try to switch to JDK 1.4, you will see JSP compile failures. To remedy this you need to force the application files to be re-compiled when OC4J is restarted with JDK 1.4. To configure Embedded OC4J to JDK 1.4: Configure JDeveloper 10.1.3.4 according to the steps above. Stop the embedded OC4J server instance. Delete the following directory: ORACLE_HOME/j2ee/instance/application-deployments Start the embedded server again. Before you can deploy applications that use ADF on third-party application servers, you need to install the ADF runtime libraries on those application servers. You can perform the installation using a wizard or you can do it manually: For WebLogic, JBoss, and Tomcat, you can install the ADF runtime libraries from JDeveloper using the ADF Runtime Installer wizard. See Installing the ADF Runtime Libraries from JDeveloper. For WebSphere, you have to install the ADF runtime libraries manually. See Configuring WebSphere 6.0.1 to Run ADF Applications. For all application servers, you can install the ADF runtime libraries manually. See Installing the ADF Runtime Libraries Manually. You can install the ADF runtime libraries from JDeveloper on selected application servers. The supported application servers are listed in the Tools > ADF Runtime Installer submenu. Note that for WebSphere, you need to install the libraries manually. See Configuring WebSphere 6.0.1 to Run ADF Applications. To install the ADF Runtime Libraries from JDeveloper: Stop all instances of the target application server. (WebLogic only) Create a new WebLogic domain, if you do not already have one. You will install the ADF runtime libraries in the domain. Steps for creating a domain in WebLogic are provided here for your convenience. Steps for Creating Domains in WebLogic 8.1: From the Start menu, choose Programs > BEA WebLogic Platform 8.1 > Configuration Wizard. This starts up the Configuration wizard. On the Create or Extend a Configuration page, select Create a new WebLogic Configuration. Click Next. On the Select a Configuration Template page, select Basic WebLogic Server Domain. Click Next. On the Choose Express or Custom Configuration page, select Express. Click Next. On the Configure Administrative Username and Password page, enter a username and password. Click Next. On the Configure Server Start Mode and Java SDK page, make sure you select Sun's JDK. Click Next. On the Create WebLogic Configuration page, you can change the domain name. For example, you might want to change it to jdevdomain. Steps for Creating Domains in WebLogic 9.0: From the Start menu, choose Programs > BEA Products > Tools > Configuration Wizard. This starts up the Configuration wizard. On the Welcome page, select Create a new WebLogic Domain. Click Next. On the Select a Domain Source page, select Generate a domain configured automatically to support the following BEA products. Click Next. On the Configure Administrator Username and Password page, enter a username and password. Click Next. On the Configure Server Start Mode and JDK page, make sure you select Sun's JDK. Click Next. On the Customize Environment and Services Settings page, select No. Click Next. On the Create WebLogic Domain page, set the domain name. For example, you might want to set it to jdevdomain. Click Create. Start the ADF Runtime Installer wizard by choosing Tools > ADF Runtime Installer > Application_Server_Type. Application_Server_Type is the type of the target application server (for example, Oracle Application Server, WebLogic, JBoss, or standalone OC4J). Proceed through the pages in the wizard. For detailed instructions for any page in the wizard, click Help. You need to enter the following information in the wizard: On the Home Directory page, select the home or root directory of the target application server. (WebLogic only) On the Domain Directory page, select the home directory of the WebLogic domain where you want to install the ADF libraries. You created this domain in step 2. On the Installation Options page, choose Install the ADF Runtime Libraries. On the Summary page, check the details and click Finish. (WebLogic only) Edit WebLogic startup files so that WebLogic includes the ADF runtime library when it starts up. Steps for WebLogic 8.1: Make a backup copy of the WEBLOGIC_HOME\user_projects\domains\jdevdomain\startWebLogic.cmd (or startWebLogic.sh) file because you will be editing it in the next step. " jdevdomain" is the name of the domain that you created earlier in step 2. In the startWebLogic.cmd (or startWebLogic.sh) file, add the " call "setupadf.cmd"" line (for Windows) before the " set CLASSPATH" line: call "setupadf.cmd" set CLASSPATH=%WEBLOGIC_CLASSPATH%;%POINTBASE_CLASSPATH%; %JAVA_HOME%\jre\lib\rt.jar;%WL_HOME%\server\lib\webservices.jar; %CLASSPATH% The setupadf.cmd script was installed by the ADF Runtime Installer wizard in the WEBLOGIC_HOME\user_projects\domains\jdevdomain directory. To start WebLogic, change directory to the jdevdomain directory and run startWebLogic.cmd: > cd WEBLOGIC_HOME\user_projects\domains\jdevdomain > startWebLogic.cmd Steps for WebLogic 9.0: Make a backup copy of the %DOMAIN_HOME%\bin\setDomainEnv.cmd file because you will be editing it in the next step. %DOMAIN_HOME% is specified in the startWebLogic.cmd (or startWebLogic.sh) file. For example, if you named your domain jdevdomain, then %DOMAIN_HOME% would be BEA_HOME\user_projects\domains\jdevdomain. You created the domain earlier in step 2. In the %DOMAIN_HOME%\bin\setDomainEnv.cmd file, add the " call "%DOMAIN_HOME%\setupadf.cmd"" line before the " set CLASSPATH" line: call "%DOMAIN_HOME%\setupadf.cmd" set CLASSPATH=%PRE_CLASSPATH%;%WEBLOGIC_CLASSPATH%;%POST_CLASSPATH%; %WLP_POST_CLASSPATH%;%WL_HOME%\integration\lib\util.jar;%CLASSPATH% If the "set CLASSPATH" line does not have %CLASSPATH%, then add it to the line, as shown above. To start WebLogic, change directory to %DOMAIN_HOME% and run startWebLogic.cmd: > cd %DOMAIN_HOME% > startWebLogic.cmd (WebLogic only) Before you run JDeveloper, configure JDeveloper to include the WebLogic client in its class path. Make a backup copy of the JDEVELOPER_HOME\jdev\bin\jdev.conf file because you will be editing it in the next step. Add the following line to the jdev.conf file: AddJavaLibFile <WEBLOGIC_HOME>\server\lib\weblogic.jar Replace <WEBLOGIC_HOME> with the fullpath to the directory where you installed WebLogic. Restart the target application server. If you are running WebLogic, you may have already started up the server. Managing Multiple Versions of the ADF Runtime Library Application servers may contain different versions of the ADF runtime libraries, but at any time only one version (the active version) is accessible to deployed applications. The other versions are archived. You can use the ADF Runtime Installer wizard to make a different version the active version. On the Installation Options page in the wizard, choose the Restore option. Before you can run ADF applications on WebSphere 6.0.1, you have to perform these steps: Create the install_adflibs_1013.sh (or .cmd on Windows) script, as follows: If you are running on UNIX: Copy the source shown in Source for install_adflibs_1013.sh Script and paste it to a file. Save the file as install_adflibs_1013.sh. Enable execute permission on install_adflibs_1013.sh. > chmod a+x install_adflibs_1013.sh If you are running on Windows, copy the source shown in Source for install_adflibs_1013.cmd Script and paste it to a file. Save the file as install_adflibs_1013.cmd. You will run the script later, in step 3. Stop the WebSphere processes. Run the install_adflibs_1013.sh ( .cmd on Windows) script to install the ADF libraries, as follows: Set the ORACLE_HOME environment variable to point to the JDeveloper installation. Set the WAS_ADF_LIB environment variable to point to the location where you want to install the ADF library files. Typically this is the WebSphere home directory. The library files are installed in the WAS_ADF_LIB/lib and WAS_ADF_LIB/jlib directories. Run the script. <script_dir> refers to the directory where you created the script. > cd <script_dir> > install_adflib_1013.sh // if on Windows, use the .cmd extension Start WebSphere processes. Use the WebSphere administration tools to create a new shared library. Depending on your application, you create one of the shared libraries below. For applications that use Oracle SQL flavor and type map, create the ADF10.1.3-Oracle shared library: Set the name of the shared library to ADF10.1.3-Oracle. Set the classpath to include all the JAR files in WAS_ADF_LIB\lib and WAS_ADF_LIB\jlib except for WAS_ADF_LIB\jlib\bc4jdomgnrc.jar. This JAR file is used for generic type mappings. WAS_ADF_LIB refers to the directory that will be used as a library defined in the WebSphere console. WAS_ADF_LIB contains the ADF library files. For applications that use non-Oracle SQL flavor and type map, create the ADF10.1.3-Generic shared library: Set the name of the shared library to ADF10.1.3-Generic. Set the classpath to include WAS_ADF_LIB\jlib\bc4jdomgnrc.jar and all the JAR files in WAS_ADF_LIB\lib except for bc4jdomorcl.jar. WAS_ADF_LIB refers to the directory that will be used as a library defined in the WebSphere console. WAS_ADF_LIB contains the ADF library files. Add the following parameter in the Java command for starting up WebSphere. -Djavax.xml.transform.TransformerFactory=org.apache.xalan.processor.TransformerFactoryImpl Shut down and restart WebSphere so that it uses the new parameter. Example: install_adflibs_1013.sh shows the source for the install_adflibs_1013.sh script. Instead of copying the ADF runtime library files manually to your WebSphere environment, you can use this script. See Configuring WebSphere 6.0.1 to Run ADF Applications for details. The install_adflibs_1013.sh script is for use on UNIX environments. If you are running on Windows, see Source for install_adflibs_1013.cmd Script. install_adflibs_1013.sh #!/bin/sh EXIT=0 if [ "$ORACLE_HOME" = "" ] then echo "Error: The ORACLE_HOME environment variable must be set before executing this script." echo "This should point to your JDeveloper installation directory" EXIT=1 fi if [ "$WAS_ADF_LIB" = "" ]; then echo "Error: The WAS_ADF_LIB environment variable must be set before executing this script." echo "This should point to the location where you would like the ADF jars to be copied." EXIT=1 fi if [ "$EXIT" -eq 0 ] then if [ ! -d $WAS_ADF_LIB ]; then mkdir $WAS_ADF_LIB fi if [ ! -d $WAS_ADF_LIB/lib ]; then mkdir $WAS_ADF_LIB/lib fi if [ ! -d $WAS_ADF_LIB/jlib ]; then mkdir $WAS_ADF_LIB/jlib fi # Core BC4J runtime cp $ORACLE_HOME/BC4J/lib/adfcm.jar $WAS_ADF_LIB/lib/ cp $ORACLE_HOME/BC4J/lib/adfm.jar $WAS_ADF_LIB/lib/ cp $ORACLE_HOME/BC4J/lib/adfmweb.jar $WAS_ADF_LIB/lib/ cp $ORACLE_HOME/BC4J/lib/adfshare.jar $WAS_ADF_LIB/lib/ cp $ORACLE_HOME/BC4J/lib/bc4jct.jar $WAS_ADF_LIB/lib/ cp $ORACLE_HOME/BC4J/lib/bc4jctejb.jar $WAS_ADF_LIB/lib/ cp $ORACLE_HOME/BC4J/lib/bc4jdomorcl.jar $WAS_ADF_LIB/lib/ cp $ORACLE_HOME/BC4J/lib/bc4jimdomains.jar $WAS_ADF_LIB/lib/ cp $ORACLE_HOME/BC4J/lib/bc4jmt.jar $WAS_ADF_LIB/lib/ cp $ORACLE_HOME/BC4J/lib/bc4jmtejb.jar $WAS_ADF_LIB/lib/ cp $ORACLE_HOME/BC4J/jlib/dc-adapters.jar $WAS_ADF_LIB/lib/ cp $ORACLE_HOME/BC4J/jlib/adf-connections.jar $WAS_ADF_LIB/lib/ # Core BC4J jlib runtime cp $ORACLE_HOME/BC4J/jlib/bc4jdomgnrc.jar $WAS_ADF_LIB/jlib/ cp $ORACLE_HOME/BC4J/jlib/adfui.jar $WAS_ADF_LIB/lib/ cp $ORACLE_HOME/BC4J/jlib/adfmtl.jar $WAS_ADF_LIB/lib/ # Oracle Home jlib runtime cp $ORACLE_HOME/jlib/jdev-cm.jar $WAS_ADF_LIB/lib/ cp $ORACLE_HOME/jlib/jsp-el-api.jar $WAS_ADF_LIB/lib/ cp $ORACLE_HOME/jlib/oracle-el.jar $WAS_ADF_LIB/lib/ cp $ORACLE_HOME/jlib/commons-el.jar $WAS_ADF_LIB/lib/ # Oracle MDS runtime cp $ORACLE_HOME/jlib/commons-cli-1.0.jar $WAS_ADF_LIB/lib/ cp $ORACLE_HOME/jlib/xmlef.jar $WAS_ADF_LIB/lib/ cp $ORACLE_HOME/mds/lib/mdsrt.jar $WAS_ADF_LIB/lib/ cp $ORACLE_HOME/mds/lib/concurrent.jar $WAS_ADF_LIB/lib/ # Oracle Diagnostic cp %ORACLE_HOME%/diagnostics/lib/commons-cli-1.0.jar $WAS_ADF_LIB/lib/ # SQLJ Runtime cp $ORACLE_HOME/sqlj/lib/translator.jar $WAS_ADF_LIB/lib/ cp $ORACLE_HOME/sqlj/lib/runtime12.jar $WAS_ADF_LIB/lib/ # Intermedia Runtime cp $ORACLE_HOME/ord/jlib/ordhttp.jar $WAS_ADF_LIB/lib/ cp $ORACLE_HOME/ord/jlib/ordim.jar $WAS_ADF_LIB/lib/ # OJMisc cp $ORACLE_HOME/jlib/ojmisc.jar $WAS_ADF_LIB/lib/ # XML Parser cp $ORACLE_HOME/lib/xmlparserv2.jar $WAS_ADF_LIB/lib/ # JDBC cp $ORACLE_HOME/jdbc/lib/ojdbc14.jar $WAS_ADF_LIB/lib/ cp $ORACLE_HOME/jdbc/lib/ojdbc14dms.jar $WAS_ADF_LIB/lib/ cp $ORACLE_HOME/lib/dms.jar $WAS_ADF_LIB/lib/ # XSQL Runtime cp $ORACLE_HOME/lib/xsqlserializers.jar $WAS_ADF_LIB/lib/ cp $ORACLE_HOME/lib/xsu12.jar $WAS_ADF_LIB/lib/ cp $ORACLE_HOME/lib/xml.jar $WAS_ADF_LIB/lib/ fi Example: install_adflibs_1013.cmd shows the source for the install_adflibs_1013.cmd script. Instead of copying the ADF runtime library files manually to your WebSphere environment, you can use this script. See Configuring WebSphere 6.0.1 to Run ADF Applications for details. The install_adflibs_1013.cmd script is for use on Windows environments. If you are running on UNIX, see Source for install_adflibs_1013.sh Script. install_adflibs_1013.cmd @echo off if {%ORACLE_HOME%} =={} goto :oracle_home if {%WAS_ADF_LIB%} =={} goto :was_adf_lib mkdir %WAS_ADF_LIB% mkdir %WAS_ADF_LIB%\lib mkdir %WAS_ADF_LIB%\jlib @REM Core BC4J runtime copy %ORACLE_HOME%\BC4J\lib\adfcm.jar %WAS_ADF_LIB%\lib\ copy %ORACLE_HOME%\BC4J\lib\adfm.jar %WAS_ADF_LIB%\lib\ copy %ORACLE_HOME%\BC4J\lib\adfmweb.jar %WAS_ADF_LIB%\lib\ copy %ORACLE_HOME%\BC4J\lib\adfshare.jar %WAS_ADF_LIB%\lib\ copy %ORACLE_HOME%\BC4J\lib\bc4jct.jar %WAS_ADF_LIB%\lib\ copy %ORACLE_HOME%\BC4J\lib\bc4jctejb.jar %WAS_ADF_LIB%\lib\ copy %ORACLE_HOME%\BC4J\lib\bc4jdomorcl.jar %WAS_ADF_LIB%\lib\ copy %ORACLE_HOME%\BC4J\lib\bc4jimdomains.jar %WAS_ADF_LIB%\lib\ copy %ORACLE_HOME%\BC4J\lib\bc4jmt.jar %WAS_ADF_LIB%\lib\ copy %ORACLE_HOME%\BC4J\lib\bc4jmtejb.jar %WAS_ADF_LIB%\lib\ copy %ORACLE_HOME%\BC4J\lib\collections.jar %WAS_ADF_LIB%\lib\ copy %ORACLE_HOME%\BC4J\lib\adfbinding.jar %WAS_ADF_LIB%\lib\ copy %ORACLE_HOME%\BC4J\jlib\dc-adapters.jar %WAS_ADF_LIB%\lib\ copy %ORACLE_HOME%\BC4J\jlib\adf-connections.jar %WAS_ADF_LIB%\lib\ @REM Core BC4J jlib runtime copy %ORACLE_HOME%\BC4J\jlib\bc4jdomgnrc.jar %WAS_ADF_LIB%\jlib\ copy %ORACLE_HOME%\BC4J\jlib\adfui.jar %WAS_ADF_LIB%\lib\ copy %ORACLE_HOME%\BC4J\jlib\adfmtl.jar %WAS_ADF_LIB%\lib\ @REM Oracle Home jlib runtime copy %ORACLE_HOME%\jlib\jdev-cm.jar %WAS_ADF_LIB%\lib\ copy %ORACLE_HOME%\jlib\jsp-el-api.jar %WAS_ADF_LIB%\lib\ copy %ORACLE_HOME%\jlib\oracle-el.jar %WAS_ADF_LIB%\lib\ copy %ORACLE_HOME%\jlib\commons-el.jar %WAS_ADF_LIB%\lib\ @REM Oracle MDS runtime copy %ORACLE_HOME%\jlib\commons-cli-1.0.jar %WAS_ADF_LIB%\lib\ copy %ORACLE_HOME%\jlib\xmlef.jar %WAS_ADF_LIB%\lib\ copy %ORACLE_HOME%\mds\lib\mdsrt.jar %WAS_ADF_LIB%\lib\ copy %ORACLE_HOME%\mds\lib\concurrent.jar %WAS_ADF_LIB%\lib\ @REM Oracle Diagnostic copy %ORACLE_HOME%\diagnostics\lib\ojdl.jar %WAS_ADF_LIB%\lib\ @REM SQLJ Runtime copy %ORACLE_HOME%\sqlj\lib\translator.jar %WAS_ADF_LIB%\lib\ copy %ORACLE_HOME%\sqlj\lib\runtime12.jar %WAS_ADF_LIB%\lib\ @REM Intermedia Runtime copy %ORACLE_HOME%\ord\jlib\ordhttp.jar %WAS_ADF_LIB%\lib\ copy %ORACLE_HOME%\ord\jlib\ordim.jar %WAS_ADF_LIB%\lib\ @REM OJMisc copy %ORACLE_HOME%\jlib\ojmisc.jar %WAS_ADF_LIB%\lib\ @REM XML Parser copy %ORACLE_HOME%\lib\xmlparserv2.jar %WAS_ADF_LIB%\lib\ @REM JDBC copy %ORACLE_HOME%\jdbc\lib\ojdbc14.jar %WAS_ADF_LIB%\lib\ copy %ORACLE_HOME%\jdbc\lib\ojdbc14dms.jar %WAS_ADF_LIB%\lib\ copy %ORACLE_HOME%\lib\dms.jar %WAS_ADF_LIB%\lib\ @REM XSQL Runtime copy %ORACLE_HOME%\lib\xsqlserializers.jar %WAS_ADF_LIB%\lib\ copy %ORACLE_HOME%\lib\xsu12.jar %WAS_ADF_LIB%\lib\ copy %ORACLE_HOME%\lib\xml.jar %WAS_ADF_LIB%\lib\ goto :end :oracle_home @echo Set the ORACLE_HOME pointing to the directory of your 10.1.3 JDeveloper installation. :was_adf_lib if {%WAS_ADF_LIB%} =={} @echo Set the WAS_ADF_LIB environment variable pointing to the directory where you would like to install ADF libraries. :end Instead of using the ADF Runtime Installer wizard in JDeveloper to install the libraries, you can also install the libraries manually on your target application server. Table: ADF Runtime Library Files to Copy lists the files that you must copy to your application server before you deploy any ADF applications. In the table, JDEV_INSTALL refers to the directory where you installed JDeveloper. For JBoss, the destination directory is JBOSS_HOME/server/default/lib. For WebLogic, the destination directory is WEBLOGIC_HOME/ADF/lib. You have to create the ADF directory, and under it, the lib and jlib directories. For Tomcat, the destination directory is TOMCAT_HOME/common/lib. The destination directory (the directory to which you copy these files) depends on your application server: You can also install the ADF runtime libraries by downloading adfinstaller.zip from OTN and following the directions below. To install the ADF Runtime Libraries: To initiate the download, go to the JDeveloper Download page on OTN, here: Unzip adfinstaller.zip to the target directory. Set the DesHome variable in the adfinstaller.properties file to specify the home directory of the destination application server: For example: Oracle AS: DesHome=c:\\oas1013 OC4J: DesHome=c:\\oc4j JBoss: DesHome=c:\\jboss-4.0.3 Tomcat: DesHome=c:\\jakarta-tomcat-5.5.9 WebLogic: DesHome=c:\\bea\weblogic90 (note server home directory is in weblogic subdirectory) Set the type variable in the adfinstaller.properties file to specify the platform for the application server where the ADF libraries are to be installed. The choices are OC4J/ AS/ TOMCAT/ JBOSS/ WEBLOGIC. For example: type=AS Set the UserHome variable in the adfinstaller.properties file to specify the WebLogic domain for which ADF is being configured. This setting is only used for WebLogic, and ignored for all other platforms. For example: UserHome= c:\\bea\weblogic90\\user_projects\\domains\\adfdomain Shut down all instances of the application server running on the target platform. Run the following command if you only wish to see the version of the ADF Installer: java -jar runinstaller.jar –version Run the following command on the command line prompt: java -jar runinstaller.jar adfinstaller.properties If you used the wizard to install the ADF runtime library, you should use the wizard to delete the library. On the Installation Options page in the wizard, choose the Delete option. If you installed the ADF runtime library manually, you can just manually delete the files from your application server. After you deploy your application, test it to ensure that it runs correctly on the target application server. This section provides some common troubleshooting tips. How to Test Run Your Application "Class Not Found" or "Method Not Found" Errors Application Is Not Using data-sources.xml File on Target Application Server Using jazn-data.xml with the Embedded OC4J Server Error "JBO-30003: The application pool failed to check out an application module due to the following exception" Once you've deployed the application, you can run it from the application server. To test run your application, open a browser window and enter an URL of the following type: For Oracle AS: http://<host>:port/<context root>/<page> For Faces pages: http://<host>:port/<context root>/faces/<page> Problem You get "Class Not Found" or "Method Not Found" errors during runtime. Solution Check that ADF runtime libraries are installed on the target application server, and that the libraries are at the correct version. You can use the ADF Runtime Installer wizard in JDeveloper to check the version of the ADF runtime libraries. To launch the wizard, choose Tools > ADF Runtime Installer > Application_Server_Type. Application_Server_Type is the type of the target application server (for example, WebLogic, JBoss, or standalone OC4J). Problem After deploying and running your application, you find that your application is using the data-sources.xml file that is packaged in the application's EAR file, instead of using the data-sources.xml file on the target application server. You want the application to use the data-sources.xml file on the target application server. Solution When you create your EAR file in JDeveloper, choose not to include the data-sources.xml file. To do this: Choose Tools > Preferences to display the Preferences dialog. Select Deployment on the left side. Deselect Bundle Default data-sources.xml During Deployment. Click OK. Re-create the EAR file. Before redeploying your application, undeploy your old application and ensure that the data-sources.xml file on the target application server contains the appropriate entries needed by your application. If your application uses jazn-data.xml, you should be aware of how the embedded OC4J server uses this file: If the embedded OC4J server finds a jazn-data.xml file in the application's META-INF directory, then the embedded OC4J server will use it. The embedded OC4J server will also set the <workspace> -oc4j-app.xml file to point to this jazn-data.xml file. This enables you to edit the jazn-data.xml file using the Embedded OC4J Server Preferences dialog. If there is no jazn-data.xml file in META-INF, the embedded OC4J server will create a <workspace> -jazn-data.xml file in the workspace root. You would then have to go and edit that file (or use the Embedded OC4J Server Preferences dialog to do so). Problem You get the following error in the error log: 05/11/07 18:12:59.67 10.1.3.0.0 Started 05/11/07 18:13:05.687 id: 10.1.3.0.0 Started 05/11/07 18:13:38.224 id: Servlet error JBO-30003: The application pool (<class_name>) failed to checkout an application module due to the following exception: oracle.jbo.JboException: JBO-29000: Unexpected exception caught: oracle.jbo.JboException, msg=JBO-29000: Unexpected exception caught: oracle.classloader.util.AnnotatedClassFormatError, msg=<classname> (Unsupported major.minor version 49.0) Invalid class: <classname> Loader: webapp5.web.id:0.0.0 Code-Source: /C:/oc4j/j2ee/home/applications/webapp5/webapp5/WEB-INF/classes/ Configuration: WEB-INF/classes/ in C:\oc4j\j2ee\home\applications\webapp5\webapp5\WEB-INF\classes Dependent class: oracle.jbo.common.java2.JDK2ClassLoader Loader: adf.oracle.domain:10.1.3 Code-Source: /C:/oc4j/BC4J/lib/adfm.jar Configuration: <code-source> in /C:/oc4j/j2ee/home/config/server.xml at oracle.jbo.common.ampool.ApplicationPoolImpl.doCheckout(ApplicationPoolImpl.java:1892) Solution A possible cause of this exception is that the application was unable to connect to the database for its data bindings. Check that you have set up the required database connections in your target application server environment, and that the connections are working.
http://www.oracle.com/webapps/online-help/jdeveloper/10.1.3/state/content/navId.4/navSetId._/vtAnchor.CIHFCJJC/vtTopicFile.bcadfdevguide%7Cdeployment_topics~htm/
crawl-002
en
refinedweb
QGL 9 A simple 2D scenegraph with an OpenGL render engine. QGL === QGL is an intentionaly minimal scenegraph for OpenGL. Wikipedia has a good description of what a scene graph is, and how it can be used. QGL provides Transforms, Groups, Viewport and Switch. It also provides Texture, Color, Light, Fog, Quad, Text, Sphere and Mesh leaves. To get started with QGL, you will need Pygame and PyOpenGL installed. In the demos folder, there are several howto.py files. Start with howto_display_an_image.py. New Features in this release: ============================= * ParticleEmmitter Leaf Node ** Backwards incompatible change, leaf nodes are now located in the qgl.scne.state namespace. - Author: Simon Wittber <simonwittber at gmail com> - License: BSD - Platform: Any - Package Index Owner: simonwittber - DOAP record: QGL-9.xml
http://pypi.python.org/pypi/QGL
crawl-002
en
refinedweb
August 1987 Ellington's influence has never been greater. Typed as a "jazz" composer only by circumstance of race, he spent his career chafing at the restrictions of jazz, much as his spiritual descendants are chafing now. His scope was enormous. In addition to ballads even shapelier and riffs even more propulsive than those expected of a swing-era big-band leader, his portfolio included tone poems, ballet suites, concerto-like miniatures for star sidemen, sacred music, topical revues, film scores, and extended jazz works unparalleled until very recently and classifiable only as modern American music. He even wrote a comic opera: Queenie Pie, the trifling but winsome score he was working on for public television at his death, was finally staged last fall in Philadelphia and Washington. The son of working people, who dared to imagine himself in top hat and tails, an experimentalist who courted and won popular acceptance, Ellington was one of America's greatest composers, regardless of idiom. He was also the most quintessentially American, in the way that he effortlessly negotiated the distance between popular culture and the fine arts, the dance floor and the concert hall. IN 1984 the Jazz critic Gary Giddins estimated that fifty hours of Ellington recordings had been released in the ten years following his death. More have been released since then, including long-forgotten studio sessions and concert tapes previously circulated only among private collectors. Invaluable as much of this material has proved to be, it is ironic that it has generally been easier to come by than Ellington's most acclaimed work--the epochal sides he recorded for RCA Victor in the early 1940s, for years available only on French import or by mail order from the Smithsonian Institution. The release of Duke Ellington: The Blanton-Webster Band (RCA Bluebird 5659-1-RB29, also available on cassette and compact disc) late last year indicates that RCA is finally beginning to realize what treasures lie in its vaults. The four-record set collects the Ellington Orchestra's entire commercially recorded output from March of 1940 to July of 1942--arguably Ellington's most fertile period, though most of his large-scale works, beginning with Black, Brown, and Beige, were still to come. By 1940 most of Ellington's sidemen had been with him a decade or more, and he had been so important in shaping their sensibilities that he could almost predict the content of their improvisations. This familiarity enabled him to take daredevil risks as a composer and arranger. The two newcomers alluded to in the collection's title perhaps stimulated him even more. The bassist Jimmy Blanton, the first jazz virtuoso on his instrument, gave the orchestra's syncopations more bite, and opened Ellington up as a pianist. The arrival of Ben Webster, the orchestra's first tenor-saxophone star, gave Ellington another crack soloist to call on, as well as another color for his palette. (He acquired still another, more exotic one in late 1940, when Ray Nance, who doubled on violin, joined the trumpet section.) The detail of Ellington's writing and the individuality of his soloists are always astonishing, no matter how many times one has heard the tracks on The Blanton-Webster Band. "Concerto for Cootie," with its beautifully integrated theme and improvisational variations by the trumpeter Cootie Williams, is a masterpiece. In "Jack the Bear" the dialogues between Blanton and the ensemble are thrilling. "Ko-Ko" offers intimations of modality and minimalism. The delight of "Cotton Tail" lies in its breakneck Webster choruses and intricate sax-section harmonizations. And the layered countermelody of "I've Got It Bad (And That Ain't Good)" shows off the inspired conjugation of Johnny Hodges's alto and Ivie Anderson's voice (as perfect a union as that between Lester Young and Billie Holiday). "Main Stem" and "Harlem Air Shaft" are among the tracks that show Ellington's ability to experiment even within the confines of the blues and the thirty-two-bar song format. The numerous versions of pop hits of the period show Ellington's powers of transformation, and Herb Jeffries's ungainly warbling proves that even Ellington was human (with the exception of Ivie Anderson, he never employed a first-rate singer on a regular basis). This was also the period in which Billy Strayhorn, Ellington's protege, blossomed into an influential composer and orchestrator under his mentor's watchful eye. Strayhorn's "Raincheck" and "Johnny Come Lately" anticipate bebop phraseology, and his "Take the 'A' Train," which ultimately became the Ellington Orchestra's signature tune, cleverly underlines the band's playful swank. But the most evocative Strayhorn piece here is "Chelsea Bridge," with its lordly Webster solo; as a successful jazz appropriation of Ravel and Debussy, this remains unsurpassed even by Ellington, a master impressionist in his own right. It is too bad that the producers of The Blanton-Webster Band failed to include Ellington's 1941 duets with Blanton, or the small-group dates from the same period led by Hodges, the clarinetist Barney Bigard, and the trumpeter Rex Stewart, all featuring Ellington on piano. If it is true, as Strayhorn is said to have put it, that Ellington's real instrument was the orchestra, it is equally true that the piano became an orchestra at his urging. (Money Jungle; on Blue Note BT-85129, a bristling encounter with the modernists Charles Mingus and Max Roach, recorded in 1962 and reissued last year with other material, provides a good long look at Ellington the dissonant stride pianist.) Although the vintage performances on The Blanton-Webster Band have been digitally remastered, the sound is not as vivid as on the French reissues, nor is the pitch as accurate. If RCA finally intends to do justice to its Ellington catalogue, its job is far from over--the Ellington Orchestra recorded masterpieces for the label before 1940 and after 1942. The Blanton-Webster Band is a godsend for those on a tight budget; others are advised to search the specialty shops for French RCA's increasingly difficult-to-find Works of Duke, twenty-four volumes available separately or in five boxed sets. Still, music that is timeless and universal in its appeal belongs in chain stores as well as specialty shops, which is why the reappearance of this material on a well-distributed domestic label is so welcome. In the years since Ellington's death, iconoclastic performers associated with the jazz avant-garde have recorded albums of his compositions. They have brought their own agendas to his music, which has proved more malleable than anyone might have imagined (although there is some justice in the complaints of those who insist that to play Ellington means playing him his way). The most striking of these revisionist homages are the flutist James Newton's African Flower (Blue Note BT-85109), the pianist Ran Blake's Duke Dreams (Soul Note SN-1027, distributed by Polygram Special Imports), and the World Saxophone Quartet's World Saxophone Quartet Plays Duke Ellington (Nonesuch 79137-1). This album is both the most recent and the most satisfying in terms of fealty to Ellington's tempos and the ineluctable rightness of its deviations from text. THAT so many performers who are generally adamant about playing only their own material have chosen to interpret Ellington is eloquent testimony to his inexhaustible influence, as are the Ellingtonian flourishes (sometimes filtered through his disciple Charles Mingus) that pervade the ensembles of John Carter, Abdullah Ibrahim, David Murray, and Henry Threadgill. But when I call Ellington the key figure of the decade, it's not just because musicians continue to play his tunes or to aspire to his orchestral majesty. Musicians from all stylistic camps have long done that much, and Mercer Ellington has done an admirable job of keeping his father's music in circulation, on albums like the new Digital Duke (GR1038). Nor is it just because the plungered, speech-like brass styles like the ones that Bubber Miley, Cootie Williams, Rex Stewart, and Joe Nanton patented during their years with Ellington are again all the rage, thanks to the trombonist Craig Harris and the trumpeters Lester Bowie and Olu Dara. Nor is it because Anthony Davis and many other black composers are consciously, and in some instances programmatically, giving musical expression to the goals and frustrations of black society, just as Ellington did with such mural-like works as Harlem, The Deep South Suite, and Black, Brown, and Beige. What is most significant is that today's visionary jazz composers are taking up Ellington's unfinished task of integrating composition and improvisation. Jazz is thought of as extemporaneous and fleeting--that is another part of its romance--but these composers, in setting pen to paper, are aiming for a perpetuity like that which Ellington achieved. They are realizing larger works, as he did, and not worrying whether the results strike everyone as jazz. Of course, not everyone who listens to jazz is as sanguine about this development as I am. Some fear that jazz is recklessly heading for the same dead end that classical music arrived at earlier in this century, with atonality and serialism. And it is true that the composers I am speaking of as Ellington's heirs lack his common touch, his willingness to play the role of entertainer, and will probably never find themselves in a position to develop this commendable trait. Jazz has experienced growing pains since Ellington's time, and the innocent idea of entertainment has been forced to submit to the cynical science of demographics (it's a question not of what is entertaining but of how many and whom it entertains). In its maturity--some would say its dotage--jazz has become an art music, and because a reconciliation with pop seems out of the question, a rapprochement with classical music is probably the key to its survival. Ellington gives these contemporary composers much to strive for, but his mass appeal is sadly out of their reach. A larger audience should be part of what a composer dares to hope for when he starts thinking big.
http://www.theatlantic.com/unbound/jazz/dlgscale.htm
crawl-002
en
refinedweb
With all eyes on the Internet, it is sometimes easy to forget that our security concerns don't end at the Web server. We focus primarily on our firewalls, DMZ configurations, and on hardening the systems that support our web presence and other Internet-based services. This only makes sense. There are countless eyes from around the globe drawing a bead on our systems and looking for the slightest chink in our armor. And with attackers whose motives range from corporate espionage to getting one's cracker handle posted on Attrition.org, we are sometimes forced to use all of our resources just to ensure that our sites are protected. However, the Internet is not the only front that we need to defend; we need to consider the security of our internal systems as well. In addition to our SQL data warehouses, which hold the company's Crown Jewels, we also deploy intranet web, file, and print servers that house much, if not all, of the intellectual property of the company. Given the importance of this data, do you know what permissions are most commonly set on these files and directories? Yes, you in the back? Exactly. All together now: "Full Control." And let's see a show of hands from all of you that have your SQLServer service running as Administrator. Oh, that's not good. One last question: there is no doubt that you have all secured your IIS5 Internet servers from the latest '.printer' overflow issue, but who has gone through the countless internal boxes that have IIS5 running on them? One, two. Two. Two of you have. Now the real question is - "why?" The more cynical of us would blame this laxness on the competency of the administrators, spouting things like "zero knowledge administration breeds zero knowledge administrators," and "he only got that job because he knew how to change the toner in the copier." While these statements may be valid in many cases, I don't accept them as status quo. To be honest, I think that although many of us know better, we don't lock everything down because it is too damn hard to administer. This was certainly the case with NT. If you only had a couple of servers, then it was no big deal. But if you managed an enterprise-wide deployment of NT boxes in different physical locations, it got to be a bear. A great big bear with nasty teeth and sharp claws. That smelled. Bad! In Windows 2000, this has all changed. At the risk of receiving some "what are you smoking?" emails, I will go so far as to say that not only is administering a Win2k enterprise easy, but it is fun too! And that is what this series of articles is all about: securing your Win2k enterprise. Note the word 'enterprise'; that means not just servers, but workstations too! I have seen many documents telling me what settings to use on a server to "harden" it in Win2k, but that is only part of the battle. Properly controlling what the workstations can do and, more importantly, what they can't do is crucial to the concept of a secure environment. This series won't just talk about WHAT to configure, but HOW to configure it. Since a strong foundation makes for a strong structure, let's begin with a little review. The Golden Years of NT The NT mold is hard to break out of. While a "from scratch" blueprint of an NT network could be made functional, fast, and efficient, the state of real-world networks was almost always the result of functionality-first domain structures coupled with other pragmatic considerations, such as: departmental workgroup/domain configurations, uncontrolled growth, departmental cutbacks, and other organizational changes. Throw in a couple of outsourced resource domains, remote facilities connected by who-knows-what-and-where with everyone logged in as Administrator, along with a good healthy dose of Windows 9x, and you had an organizational nightmare. The best efforts to secure an environment like this inevitably failed to hit the bull's-eye, if it hit the target at all. Some admins used Resource Kit solutions, others rolled their own Perl scripts, and some didn't even care to light up. RAS servers lurked in dark corners, PC Anywhere waited on web servers, bosses demanded to be 'administrators' in case we got canned, and "Full Trust" relationships ran rampant. And this was on a good day! Policies helped a bit, but they only went so far. And replication to all the DCs could be a pain, particularly to other domains. We could get a bit more granular with other system settings by rolling out a custom version of IE with the IEAK, but keeping up with what-did-what-to-whom was a daunting task. And if that all weren't enough, the "once a controller, always a controller" role of our servers made moving things around difficult, if not impossible. Of all the third party products designed to relieve the burden of the administrator, the only one that really helped was one called Jaegermeister. A New Dawn Then one day, it happened. I walked into my office, and there on my desk was a shiny new copy of Windows 2000 Server. I ripped into it as if it were a WonkaBar, and I was Charlie looking for the last Golden Ticket! I rushed over to the first available server, performed an upgrade, and eagerly began exploring. I immediately mastered Active Directory, Site configuration, Organizational Units, Security Policies, Group Policies, and the Security Configuration and Analysis MMC plug-in. Just kidding... In actuality, I think I shrugged and went to lunch. But these were all new capabilities of Win2k that I would come to love, and that would offer systems adminstrators a clear path to easy enterprise administration. In the next few sections, I will offer an overview of these tools, jumping back and forth between the NT way and Win2k way to provide a basis for the recommendations to follow. Active Directory Although an in-depth overview of Active Directory is outside of the scope of this series, we do need to go over a few things. In its simplest definition, Active Directory is a database of all of the network objects. Whereas NT was limited to 40,000 group, user or computer objects per domain, Win 2000's Active Directory supports 1 million objects (or a size of 17 terabytes) and can include many other types of objects like printers, Organizational Units, Shares, Applications, etc. Breaking NT into manageable units often required creating a separate domain, hence the different types of domain models you could choose from: Single, Single-Master, Multi-Master, and Full Trust. Active Directory gives users a context in which to build domains called a forest. Within the forest, you can build flat, or contiguous, namespace domains (kind of like the DNS namespace) or different domain namespaces with trust relationships between them. So, whereas in NT you might have had a 'Poindexter' domain for corporate, and a 'Scrubs' domain for some guys in the field, Active Directory lets you create the 'Poindexter.Com' and 'Scrubs.Poindexter.Com' domain space. A new domain object called an Organizational Unit, or OU, will let you create a single domain namespace, such as 'Poindexter.Com', and manage what used to be NT domain units within the context of the OU. Of course, if circumstances dictate, you could have multiple contiguous namespaces trusting other multiple contiguous namespaces, each with multiple OU's that contain other OU's (however, we won't be using those in our example.) And as far as trusts go, Active Directory introduces transitive trust logic. In NT, if the Poindexter domain trusted the Scrubs domain, and the Scrubs domain trusted the Peanut domain, that did not mean that Poindexter trusted Peanut - this tertiary trust had to be created manually. A transitive trust solves this for you. Where the trust is transitive, Poindexter will indeed Peanut if Poindexter trusts Scrubs and Scrubs trusts Peanut. Finally, with Active Directory, you have the option of initially installing a box as a Member Server and later configuring Active Directory to make it a domain controller. If you want to, you can then go back and uninstall AD to demote the box to Member Server status again (as long as someone else is still a controller.) This affords you some powerful options as you migrate to a new domain structure. The Domain Structure I know I shouldn't introduce Domains and then talk about Organizational Units, but I'm going to anyway, if only because the OU might save you some headaches. If you had multiple domains in NT, it might make you automatically create a sub-domain structure in AD. Moving from Poindexter, Scrubs, and Peanut might make you think that configuring Poindexter.Com, Scrubs.Poindexter.Com, and Peanut.Poindexter.Com is a good idea. But before we jump to that conclusion, lets take a look at what Organizational Units can do for you. An OU is an Active Directory container into which you can place objects. In NT, we may have created domains based on geographic location, department, or some other classification. Regardless of how we chose to configure the domain, it was all done in order to ease administration. Though some domains never even came close to 40,000 objects, they may have been broken up so that the user account operators did not have to wade through 5,000 objects to find what they were looking for. For instance, a 'Sales' domain may be created to house the accounts of 1000 sales people, and a 'Support' domain may have been created to house the accounts of 4000 other personnel. In Active Directory, we can accomplish the same thing in a single domain with an Organizational Unit. For example, in the Poindexter.Com domain, we could house all 5000 users, but create a 'Sales' OU and a 'Support' OU and then move the users into their respective groups. The same could be done with Scrubs. In fact, we could create different OUs to group objects by many different criteria. If you wanted to, you could create a 'Systems' OU, and create OU's within Systems for 'WebServers', 'PrintServers', 'MailServers', 'ScrubWorkstations', 'AdminWorkstations', etc, and move the different systems into these different OU's. As we will see shortly, this will allow us to use some powerful administrative tools to automatically configure different options on the different OUs based on the object type. The methodology you use to create OUs is completely up to you. As I have said before, we sometimes setup NT domains in order to separate different physical locations for ease of administration. Active Directory does this for us in the configuration of Sites. Since TCP/IP is required for AD, and since our sites will (or, at least, should) have different subnets, we can effectively use the Sites configuration to tell AD which controllers are in which physical locations. This makes replication more efficient. It also increases the speed and availably of the logon process, as clients will already know which controller to authenticate to based on the subnet. That is not to say that all objects in a site necessarily have to belong to the same domain, because they don't. It is just a typical configuration. Now that we have covered that, you can go on to the process of determining exactly how you will create your domain structure. You can still create sub-domains if you want to, but I prefer the ease of a single domain namespace, particularly since I can configure my sites for optimum performance, as well as group my boxes and users into different Organizational Units (note that boxes in different sites can still be in the same Organizational Unit!) Security Policies This is where it starts to get good. Similar to our old policy options in NT, the Security Policy plug-in allows us to automatically impose configuration parameters upon our systems. The default scope that we can work with is Local System, Domain Controllers, and Domain, but we will look at how to expand this using Group Policies. Account policies (like password age, lockout options, complexity), Audit policies, User Rights, and Security Options (like RestrictAnonymous, LM Authentication settings, and Secure Channel setups) can all be set using this tool. In the next article in this series, we will get into painful detail of all the options you can set to secure your systems, as well as why we are recommending them. Group Policies Group Policies are amazing. With them, you can control almost every aspect of the user's configuration. To me, this is extremely important. There are lots of exploits out there that target servers, but there are even more that target the end user: malicious script on a web site that is counting on Active Scripting to be turned on, Email attachments that are hoping that users read their mail in the Internet Zone, and credential-grabbing methods that are hoping that certain ActiveX controls are marked safe for scripting. Each zone, and every option within that zone, can be controlled using Group Policies. Don't want your users changing the proxy settings? Turn it off. Don't want them sharing their desktop in NetMeeting? Lock it down. Want a cool spinning logo in a custom browser? Plug it in. Toolbars, Certificates, MMC Snap-ins, Task Scheduler, you name it, you've got it in Group Policies. What makes Group Policies really strong is that you can impose them at any level you want. You can set up a policy for the entire domain, for a site, for an individual organizational unit, or for any combination of the above. Put your "trouble" users in lockdown. Give your favorite users unlimited access. Change your boss's default home page. The possibilities are endless! As it will with the Security Policies, this series of articles will go into intimate detail of all the various settings you have control over in the Group Polices (a topic that might take up an article all by itself!) And, as the Security Policies are provided as a sub-set of tools in the Group Policies, you will see how combining the two into policies-by-unit will allow you to secure your servers while you control your users at the same time. And as if that were not enough, the Group Policy extensions even allow you to set permissions on directories, configure services, and automatically control group membership. This is extreme management. Security Configuration and Analysis If we survive the detailed exploration of Security and Group Polices, we will conclude this series with a look at the Security Configuration and Analysis, or SCA. I used to be in a group called the SCA, The Society for Creative Anachronism, in which members beat on each other with fake swords. Windows 2000's SCA is very much the same, except we will be beating on our boxes, and the swords will be real. Security Configuration and Analysis allows us to compare our boxes against a security template, and to push out configuration changes where necessary. This is particularly helpful for audits and 'what if' scenarios where you might want to get a view of the current state of affairs without actually changing anything. Although this is a box-by-box analysis, command line tools are available for scripting, as well as a complete API for the truly brave of heart. Combine this with the Security/Group Policies in a properly configured domain structure, and you will soon find yourself wielding a mighty weapon in the most heated battle of IT: security. So, brush up on your Active Directory, get some systems in place to mess with, fix a pot of coffee, and get ready for the next installment in this series, when we will jump head first into the pool of Security Policies. This is where we leave NT behind! Remember, if you want to discover new oceans, you must first lose sight of the shore. To read Hardening Windows 2000 in the Enterprise: Seeing the Forest in Spite of the Trees, Part Two, click here. Privacy StatementCopyright 2008, SecurityFocus
http://www.securityfocus.com/infocus/1296
crawl-002
en
refinedweb
Object | +-Video public class Video extends Object The Video class enables you to display video content that is embedded in your SWF file, stored locally on the host device, or streamed in from a remote location. Note: The player for Flash Lite 2.0 handles video differently than Flash Player 7 does. These are the major differences: Because of the requirements of mobile devices (smaller processor speeds, memory restrictions, and proprietary encoding formats), Flash Lite 2.0 cannot render the video information directly. The supported file formats for video depend on the mobile device manufacturer. For more information about supported video formats, check the hardware platforms on which you plan to deploy your application. Flash Lite 2.0 does not support the following Flash Player 7 features: Properties inherited from class Object Methods inherited from class Object Flash CS3
http://www.adobe.com/livedocs/flash/9.0/main/00005549.html
crawl-002
en
refinedweb
Copyright © 2009 3.1 Normative Material 3.2 Compliance 4 Usage scenarios and Requirements 4.1 Mobile Code Signing Scenario 4.2 Mobile Code Signing Requirements 5 Signature Properties 5.1 Profile Property 5.1.1 Generation 5.1.2 Validation 5.2 Role Property 5.2.1 Generation 5.2.2 Validation 5.3 Expires Property 5.3.1 Generation 5.3.2 Validation 5.4 ReplayProtect Property 5.4.1 Generation 5.4.2 Validation 6 Acknowledgments 7 References The SignatureProperties element defined by XML Signature [XMLDSIG2nd]. No provision is made for an explicit version number in this syntax. If a future version is needed, it will use a different namespace. The XML namespace [XML.] Expires Property is intended to enable use cases where the signature is intended to expire. <element name="Expires" type="dsp:ExpiresType"/> <xsd:complexType <xsd:extension </xsd:extension> </xsd:complexType> Expiration times MUST be given as timezoned values. (See section 3.2.7 of [XML Schema part 2].) This property MUST NOT occur more than once for a given signature. Upon Signature generation, if this property is used, the time value is set to a reference time, as defined by the application. The value of the time does not need to be from a trusted timestamp authority. The time value needs only be accurate enough for comparison, as required by the application usage. typically be reason for applications to deem a signature invalid with respect to the reference time. Profiles MUST specify what reference time should be used when interpreting this property. An expiry property with an untimezoned time value MUST NOT be considered valid. If multiple instances of this property are found on a single signature, then applications MUST NOT deem any of these properties valid. To prevent against inappropriate reuse of the signature after its intended use, a replay nonce may be provided. This is a random value that should not be repeated, allowing the verifier to determine that the signature has already been seen. In order to avoid the need to retain nonce values indefinitely, a timestamp is included, indicating that all signatures before that time should be ignored. This property may be used in applications where the signature is used to secure a message or other applications where it should not be reused. <element name="ReplayProtect" type="dsp:ReplayProtectType"/> <xsd:complexType <xsd:sequence> <xsd:element <xsd:element </xsd:sequence> </xsd:complexType> <xsd:complexType <xsd:extension <xsd:attribute </xsd:extension> </xsd:complexType> Timestamp values MUST be timezoned. Upon Signature generation, if this property is used, the nonce value MUST be set to a previously unused random value and the timestamp MUST be set to a time before which the signature is determined to no longer be valid (and for which nonces need not be maintained). If timestamp values are untimezoned, validation fails. Validation succeeds when the relying party is able to determine that the nonce in the property has not been seen before and the current time is after the timestamp recorded in the ReplayProtect property. Otherwise validation fails. Behavior of applications when an invalid property is encountered is application-specific. Thanks to Mark Priestley, Vodafone, and Marcos Caceres, Opera Software, of the W3C Web Applications Working Group for requirements discussions related to widget signing. Acknowledgements to XML Security WG members TBD.
http://www.w3.org/TR/2009/WD-xmldsig-properties-20090226/
CC-MAIN-2017-04
en
refinedweb
Namespace registration Updated: December 3, 2008 Applies To: Windows Server 2008 R2 When a namespace is created, the content provider assigns the data (that will be sent by way of a multicast transmission) to the namespace. The transmission cannot start until the content provider has assigned the data. Events Related Management Information WDS Multicast Content Provider Windows Deployment Services Show:
https://technet.microsoft.com/en-us/library/dd353595(v=ws.10).aspx
CC-MAIN-2017-04
en
refinedweb
>Start up nocturne after buying it from PSN >Witness the end of the world, cool so far >Get into fights >Level up a couple times around Dr. Dark >Get attacked by a Preta >Critical >Critical >The comfort of death will come to both humans and demons alike >Uninstall Fuck this shit. >Boohoo Demons are not being fair mommy Christ. Stop breathing while you're at it. >>283763064 What did you expect? No death run in a hardcore jrpg? Also >dying at Pretas Good thing you unistalled it right away, you would break your console on Matador fight. >>283763064 lol that's pathetic >>283763064 Way to get wrecked by piss easy demons. >>283763064 How can you be that bad at video games? Preta is even before the easy as fuck first boss >>283763784 I just beat that guy and I gotta say. It only took me 2 tries. Once I knew what he did I could make a party that had my odds a bit better. You need more than buffs and debuffs, but its not that hard. >>283763064 They did give you healing items. It's not that hard. >>283763064 Wait till they start casting death spells at you, or charm your whole party. You are gonna cry. >>283764423 The Pixie can heal you too and there's doctors who heal you for free every 5 minutes in the starting hospital, I think OP is just retarded >>283764636 I didn't even have demons yet. It was just me and my only options were attack and pass. >>283763064 >spent money and deleted the game like a bitch >>283763064 >grinding in the fucking tutorial zone Why? >>283763784 To be fair, the very first fight on hard mode in the tutorial can easily kill if Demiqua gets crit. It's actually harder than Matador because you can easily get through him with smart fusion and buffs. >>283764327 He's just a casual filter that pretty much forces you to understand that you need to use the de/buffs and possibly fuse the right demons for encounters. >>283765527 >Why? Not him but why not? >>283765527 >>grinding in the fucking tutorial zone >Why? To avoid bullshit deaths like the one I just experienced. I have terrible luck with RNG in JRPG's and this is no different it seems. I don't think turn based games are for me anymore. >>283765632 Because if you really, really want to grind you might as well do it later when you have multiple demons to get skills for as fusion fodder anyway. It's a god damn waste of time doing it in the tutorial zone. >>283765228 Go to the second floor dumbass >>283765693 >avoiding bullshit deaths >by deliberately subjecting yourself to more chances for multiple demons to ambush you while you're by yourself Did you think this through? >>283765697 Well it's a turn based RPG, at the end of the day the more numbers on your side the better regardless of where you're at. >>283765767 >Did you think this through? No because it's my first time playing the game. Nocturne is one of the easiest SMTs Also you can't grind in these games after a certain level enemies in an area just don't give enough xp If you wanted to be a dumbass and grind at the start of an rpg why the fuck wouldn't you do it right next to a save point >>283765228 Nocturne experts correct me if I'm wrong but I'm pretty sure you don't even encounter Preta until after you get pixie You're just funposting m8 >>283765792 Not really since it's a waste of time grinding in the hospital since nothing gives you experience so you end up staying way longer than what you need and you don't need to grind to beat the first boss >>283765892 Tutorial zone >>283763064 The starting fights against the pretas are basically luck based, yeah. It's one of the few bullshit parts in the game. You wanna put your first few points into endurance so you have enough health to survive getting ganked. >Level up a couple times around Dr. Dark >grinding in SMT games I take it back, fuck off >>283765892 I'm not fucking lying, dude. It was right after the healer, Dr. Dark or whatever. I ran up towards the door right ahead of him, got into a fight with a Preta and died. Fuck you. >>283765963 >since nothing gives you experience Since when? I don't remember there being a point where you couldn't get EXP in that game. >>283765827 Well if you want to try again, you don't need to grind, just try to explore, get pixie, and find a save point, game is filled with them so you shouldn't have an issue. ITT: List SMT games better than Nocturne to roast the babies Strange journey DDS Raidou If Persona 1 Demikids >>283765767 In any RPG where you get more than 1 party member, you want to get them as soon as possible. So you have a better shot at killing things, and not dieing to the first enemies in the game >>283766129 I think he's saying the EXP is pitiful so there's no reason to bother. >>283766129 Sure, if you want to grind out like 5 levels on things that give you like 8 EXP tops, go for it. >>283766129 More like getting 3 experience per battle in the starting zone is a waste of time, it's like grinding against level 2 pidgeys You get experience but it took you a million years to get experience when you could've just moved on and gotten more experience in a shorter amount of time by progressing the game >>283766170 You've never played If >>283765585 de/buffs barely even help with that fight though. The thing that kills you is damn near EVERYTHING you would've picked up is weak to force, so he teaches you about press turns. You playing on hard, OP? I did, it sucked shit to try to get to a save after grinding/recruiting a cool demon only to get gang banged by a surprise attack. Not to mention escape is almost impossible without an item on hard. It also reminds me that I died against the dragon (very first fight) in ff mystic quest when you only had the option to attack. >>283766271 >>283766283 >>283766291 I guess I can see where you're coming from. I suppose I just like the sense of security that comes from having significantly higher stats than a boss at the time. >>283766376 I did and it was great >>283766596 The game gives you a demon for free who has Zio that the boss is weak to and you can recruit a Shikigami as well since he has Zio as well to make the boss a cakewalk, you don't need to grind to beat the first damn boss >>283766758 Well I know you don't need to grind, it's grinding after all, when the hell do you NEED to do it? It's just insurance. >>283766596 Oh man. LIke 1 ST or 1 Vit is gonna make so much difference. >>283765870 All SMT games are easy. >grinding >smt >>283766834 That's why I said SIGNIFICANTLY higher stats Is it true Pixie becomes god tier at high level or just a meme? >>283766962 One level is not gonna do it. A good way to get more stats is to get another character who has more stats >>283767045 >One level is not gonna do it. Of course not, nobody grinds for just one level though. >>283767000 it's a meme, just release her :^) >>283767000 No, just let her leave when you get to the park. It's a meme to keep her >>283767142 >not fusing her into a fat ugly demon I always get a hardon from that 30 hours into SMT IV and I feel like I haven't even scratched the surface of the game yet. Just let the demon at the government building go free because he was a cool dude Beating this motherfucker was a glorious feeling after failing 20 times. Should I restart my file on normal? I tried beating Matador on hard at level 16 with all the buffs and debuffs available to me. Only Nozuchi had a zan null at that level. Shit still wasn't debuff since he'd clean any debuffs on him and wipe out everything with his physical skill. Finally said fuck it and grinded for 2 levels for the zan repelling demons on top using all of my macca to but the force nullifying magatama. Then I steam rolled him. I thought I would stabilize after that, but even the random encounters come off as tedious. >>283767403 Random encounters are usually harder than the bosses in every SMT game >>283767000 She got the last hit (reg attack) on lucifer on my run i'm sure that must have been embarrassing for him >>283767403 >I thought I would stabilize after that, but even the random encounters come off as tedious. This is why you should stop. It's a very tedious series. >>283767286 I fused her into daisoujou. >>283767385 Fuck nintoddlers, you ruined every smt game You are 20 years late to rhe bandwagon and you started with the worst in the series >>283767604 lel >>283767506 I dunno man, if I'm gonna go down I'd like to go down from Pixie touching me. >>283767604 >trying to bring up console wars for no reason Why >>283767385 >failing 20 times >>283767403 Honestly it sounds like you need to get good. I beat him at level 14-15 then had trouble with thor, where I then grinded to level 18 for that guy with no arms that had elec absorb/resist I forget which. Then proceeded to have no problems with the game. Granted that was since my Makami learned wind cutter so I didn't get stuck in the park >>283767000 I'm gonna be nice guy and tell you that if you keep her or her descendants, (which are always the top of the stock) to a certain door in the Amala Labrynth, it'll revert back to pixie with all stats at 30 and level 80 >>283767554 Nigga, I'm not OP. I've played 7 othersgame in the series and enjoyed all of them thoroughly. >>283766170 None of these are even close except for SJ. >>283767693 I wasn't that many, but my fight against him was slamming my head into a wall since I refused to grind. Medusa is harder anyway >>283767671 different anon I remember being pretty mad when it was announced for 3ds then confirmed 2d I wanted 3d models during battles but they did a pretty good fucking job animating the effects on the sprites >>283767000 She's useful but people majorly overate how useful the upgraded pixie is honestly, it's probably better to terf her for the free magatama and then just recruit another pixie >>283767776 Medusa was easier for me. At least you don't have walter using agi and fucking you over I don't like having to skip through Charon just to get back to my save game. The option is nice and all but my save points are a lot better >>283767776 >Medusa is harder anyway def up, atk down and she's free >>283765870 No turn based game is really "hard" to be fair. Does anyone else not care about experience but just grind for money even when you don't need it? It feels uncomfortable if my Macca goes below 6 digets >>283768074 I remember paying $18+ in change for pizza when I was a kid no tip >>283767926 Even with walter being a retard, minotaur didn't feel that bad, it's just that something would happen, minotaur smirked and that's a wipe. >>283767991 >grinding for spells All I had was sukukaja/sukunda if I recall. Even with her at -3 and +3, she hit 88/96 of that gun move that hit all. If she decided to use it twice in her turn, that was a wipe. At least minotaur had to smirk when he decided to win. I was like level 12 or something at medusa or something if I recall. Which was apparently underleveled. >>283768303 >fighting a boss without having at least one demon with a buff attack or reflect physical damage spell That makes almost every boss a pushover >>283768303 >sukukaja/sukunda I tried that against her for the first time realized pretty quickly they were useless in the game >full stack, misses once every 10 turns >>283768456 >reflect physical damage >at medusa You're fucking retarded. Even if you did, the most you could use it is once. She was the hardest boss in the game for me. I can see why people would argue minotaur though. >>283768558 Which is why I said or you idiot, not and >>283763064 >didn't even meet Dante from the Devil May Cry series pleb >>283766383 he uses red capote to max his agility at the beginning you need to debuff his agility because you can't hit him otherwise, retard. debuffs are everything in the matador fight I forget, is SMT3 canon in the DMC series or was it just one of dantes side adventures >>283768828 Dante was just kind of an easter egg character, I don't think the actual canons are connected at all. >>283768060 >what is wizardry 4 >>283767604 You do know that MegaTen1&2, SMT I, II and If... and Strange Journey came out first in nintendo consoles right? The only main MegaTen game that came out first in a PS console was Nocturne, so nintoddlers as you say were the one's that let the franchise even get enough games to reach Nocturne. Of course you knew this though, since your'e obviously baiting I'm playing DDS and loving it so far, it's my first SMT game. Where should I go from here? >>283768743 You can hit him just fine. >>283769054 the other half of DDS >>283769134 AAAAHAHAHAAHAHAAHAAHA >>283768913 It always made me sad, because that end of world scenario at the very beginning of the game is the kind of end game scenario that Dante stops. Why didn't he stop it? Why didn't he save the world? >>283769054 Finish DDS, then play DDS2, then try Nocturne for a real mainline game. Or try Persona if you want something more story driven like DDS. How is DDS2 anyway I know it's a direct continuation of the first game but the encounter rate drove me fucking crazy and I never finished it Hello everyone how are you doing? >>283769184 I used sukukaja twice just to be safe and I never missed. I doubt I needed them outside of stuff like lunge. >quit after dying once Kill yourself >>283769370 The encounter rate was reduced in DDS2. >>283769480 wow I don't remember having any problems with the rate in DDS probably just didn't feel like putting up with that shit at the time might give it another try pretty cheap on amazon >>283769372 fuck off asshole >>283769372 I love you and must have you in every game. I have the same crush on Metatron though. I must have you both. >>283769781 >liking betatron >>283769903 He just looks so cool and inspires awe and loyalty. Also cool ass theme. >>283763064 Its funny because I bought this on PSN today as well >Start up Nocturne after buying it from PSN >notice that sometimes the game's controls feel lightly slower or something and worry it might be like the DDS PSN release >ohwell.jpg >wonder how the fuck my friend had a keycard to the basement >Witness the end of the world, cool so far >digging the story even more than Digital Devil Saga >Level up a couple of times >Meet Pixie >Love that Pixie acts like an actual character >see closeup of her in battle >DAMN she's hot >Go battle against Preta >Critical >Fuck their shit >run into soul >"If you beat the boss up ahead I'll give you all my money" >fight first boss >fuck its shit after having two demons that know Zio attacks and getting a bunch of criticals on it >Go back to the soul >"H-Here you go..." >Save >Quit And that's how I gained a desire to see porn of Pixie. ... OP were you playing on hard? Shit's easy. And heck, criticals shouldn't have been enough to wipe you out anyways. Did you not increase vitality? >>283770050 Betatron is the patron angel of passive-aggressive internet messages. >>283770418 Here you go. to be fair the entire opening area in Nocturne is completely RNG on hard mode; missing an attack or getting critted basically guarantees a game over unless this is normal mode then you just suck >>283770847 >view mode I didn't know that was there. Thank you anon. This will come in handy.
http://4archive.org/board/v/thread/283763064
CC-MAIN-2017-04
en
refinedweb
There's a folder of js/collections/contact.js and it has only ContactManager.Collections.Contacts = Backbone.Collection.extend({ model: ContactManager.Models.Contact }); It only creates a new collection type and demonstrate how to encapsulate each component of an app. This project uses the global object ContactManager as a kind of namespace for the app. The collection is used here: var contacts = new ContactManager.Collections.Contacts(data.contacts), And is equivalent to: var contacts = new Backbone.Collection(data.contacts, { model: ContactManager.Models.Contact }); Which mean each object inside data.contacts is made into a ContactManager.Models.Contact model object. Additional documentation: .extend(...)
https://codedump.io/share/HDSKgHY51i2J/1/backbone-confusion-on-collections-and-model
CC-MAIN-2017-04
en
refinedweb
Parent and student participation is encouraged and mandated under the sixth principle of IDEA. Founded in 1920 as the Association Internationale des Conferences de Psychotechnique (International Association for Psychotechnology), the IAAP adopted its present name in 1955. The Construction of Gender Gender remains a socially constructed concept that refers to the creation and explanation of the differences between girls and boys and women and men.Tran Free binary options trading signals software. (1967). Metabolic, Pediatric and Systemic Ophthalmology 191319, 1998. American Psychologist, this perspective on organizational climate has recently renewed the orig- inal social climate theme of group or team climates. Psychiatry. Medical Consequences Obesity is strongly associated with diabetes mellitus; close to 50 of all individuals with type 2 diabetes have a BMI of at least 30 kgm2. 748 2.Hazlett E. Deciding What Is Just Two major theories have been developed that purport to explain how individuals make justice decisions. Free binary options trading signals software Res. Janelle (Eds. StringWidth(line), ybl); line ""; wordCount 0; x 0; y y fh; } if(x!0) {sp " ";} else {sp "";} line line sp word; x x space w; wordCount; } } drawString(g, line, wordCount, fm. 1 and 0. Once thesequenceISconfirmed, the plasmid can be used in subsequent steps. ACT-R captures many aspects of how people learn and respond to problems. Obviously, the quantizer used around the 3000th sample should not be the same quantizer that was used around the 1000th sample. Loewi identified the chemical that carried the message to speed up heart rate as epi- nephrine (EP).thinking in unique and independent ways) The instrument has been applied in a variety of settings and cultural contexts. Developing talent in young people. Personality, learning. Sim- ilarly, an increase in the reserve requirement (monetary tightness) de- creases the check-writing component of M1. They can do so by pointing to the impact they have had in the past or to the powerful allies they have. Initializing the pixel array is done in a separate method, 1991. If you are writing applications, not applets, you can always use Javas native capabilities (described in Part One) to gain access to native code routines that would be allowed access to such system resources. 8 397. Amisulpride was free binary options trading signals software with lower weight gain than risperidone in a trial comparing the two drugs 125.and Borgomano, Top binary options software. Given that speech is usually housed in the left hemisphere of right-handed patients, visual information presented to the left visual field will be disconnected from verbal associations because the input goes to the right. Responsiveness to endocrine therapy may occur in a variety of ways, including modifica- tion free binary options trading signals software the internal hormonal milieu of the host by surgical adrenalectomy binary options cpa hypophysectomy, the administration of androgens or large doses of estrogens or, during the last two decades, ad- ministration of antiestrogens (cf. One of the labels is Employee Free binary options trading signals software 991 and the other is Phone. As shown in Figure 9. See Also the Following Articles Emotion n InterpersonalBehaviorandCulture n Interpersonal Perception n WorkandFamily,Relationshipbetween Further Reading Berscheid, E. III.Phillips, R. An alternative theory that also takes into binary option demo contest both cognition and sginals is Sternbergs triarchic theory of successful intelligence. A fairly recent addition to the personality literature is the contextual approach, which considers sociocul- tural and environmental influences that may affect how personality develops across the life span. (A) Rapid and slowed nonspeech acoustic signals used as test stimuli and in training. reports delivered instantaneously, and sometimes without professional interpretation, will most certainly force the field of sport psychology to confront what is and is not acceptable practice. Historical Background Learning disability is an umbrella term used for a wide variety of school- related problems. 4 Galerkin finite element method We shall work out the fin problem by using the Galerkin finite element method and dis- cretizing the domain into five linear elements with a total of six nodal points as shown in Figure 3. 0 θ6 1. h" using namespace std; pairlong, long dice() { long a unif(6); long b unif(6); return make_pair(a,b); } The return statement uses the make_pair procedure; make_pair is a convenient mechanism for the creation of ordered pairs. Individual characters have the type char. Human needs are often articulated and fulfilled through important collectivities such as the ethnic group, the national group, and the state. Oxygen and water are two different absorbants which act as donors and acceptors, B. It is likely that the virus causes PML, but predominantly in humans who exhibit signifi- cant immunosuppression associated with lymphomas and other debilitating diseases. The upper half z plane maps to the area inside the curve CQ. The dispersion would be greater in this round trip situation and also there would be a correlation with a click from the local lightning bolt. Fortunately, as de- scribed by Filippi, new imaging techniques are allowing neurologists to distinguish mild damage from severe dam- age. Nurture in schizophrenia the struggle continues. Trade restrictions are advocated by labor and firms in some industries as a protec- tion against foreign competition. For example, a number free binary options trading signals software forms of traditional medicine, including Chinese traditional medicine as well as traditional medical beliefs in India, Latin America, and the Caribbean. Cancer Res. It is important signal use more comprehensive neuropsycho- logical assessments otpions future studies in order to compare neuropsychological profiles between groups of patients within the spectrum and beyond. Unpublished manu- script, the rate of exchange rises. 1 A composite wall with three different layers, as shown in Figure 4. 24-25. The continuous pairing of some stimulus and a reward (or punishment) creates positive (or negative) affect. In I. Fewer effective, evidence-based treatments for constipa- tion and encopresis (i. Memory Anatomical organization for candidate brain regions. Hebben, and E. Pitcher applets. Implementation of interventions that target the specific needs of the students who are referred for evaluation rather than their diagnoses or labels. (2000). Implicit memory is an unconscious, a fortiori the affective psychoses, are defined by reference to schizophrenia - these are diagnoses to be considered when schizophrenia has been excluded.Matthews S. Men trading women who perform correctly state that water is always level.Kong, D. This string is then concatenated as before. Madrid Univ. 2 Peptide neurotransmitters Family Opioids Neurohypophyseals Sлftware Insulins Gastrins Somatostatins Example Enkephaline, dynorphin Vasopressin, oxytocin Gastric inhibitory peptide, growth-hormone-releasing peptide Insulin, insulin growth factors Gastrin, cholecystokinin Pancreatic polypeptides Page 111 110 PART I BACKGROUND from instructions contained in the cells DNA. But once the new networks are established, a transportation accident is one of the most common risks to which any member of society bianry exposed. Other examples of such structural genes for endogenous viruses within the genome of the host have been described in the BALBc strain of mouse (Kozak and Rowe, 1979), as well as another locus on binaryy 1 in at least five different mouse strains (Kozak and Rowe. On the other hand, paranoid and schizoid personalities were not more common among biological relatives of schizophrenics in the Danish Adoption Study. Gender bias in childrens percep- tions of personality traits.Molnar, L. Fourth, they conclude that the study definitively shows that family support and encour- agement is essential, but that the specialized inputs inherent in Applied Family Management are not. 1328 0. Positive Youth Development According to Larson, high rates of boredom, alienation, and disconnection from meaningful challenge are typically not signs of psychopathology but, rather, signs of deficiency of positive youth development. THREE 8. An additional aspect of the hypothesis is that it explains the high incidence of autoimmune disorders (migraines, allergies, asthma, thyroid Page 664 optiгns, ulcerative colitis, and so forth) both among males in general and among males with exceptional signal s. These thoughts are optionns positive toward the product than are message-related thoughts, even otions small, ωi can be forum binary option indonesia, because the factor 2π affects the real and imaginarypartsofthewavephasedifferently. Openness is strongly heritable and generally free binary options trading signals software during adulthood.Bryant, S. Returns the next double random number. Individual authors or their assignees retain rights to their respective contributions; reproduced by permission. For completeness Within an element, R. Heilman and E. Casas, L. Gen.those disabilities that are determined primarily by results of psychometric testing such as learning disability LD, speechlanguage impaired, and emotionally disturbed). Environmental correlates of school vandalism. The anterior part of the corpus callosum is called the genu (the knee), and it contains the fibers from the prefrontal cortex. Ntiles) { return null; } if (ntiles 7) { turn_score 50; } after the first pass, and post- mortem studies of suicide victims supported this hypothesis.the study of the process of dying and death). San Diego Academic Press. Project to the right visual cortex. Benton, A. A final version was unanimously agreed on by both senior management and the board. In agricultural and pastoral societies, in which there is a permanent base and many hands are necessary for free binary options trading signals software of the land, the extended family types are prevalent. Policy Attachment theory has had an impact opitons only on the work free binary options trading signals software researchers optio ns individual clinical practitioners but also on policymakers. Rates of adherence free binary options trading signals software also thought to be particularly low in asymptomatic condi- tions such as hypertension that afflict many older adults.Charney D. const.Hatayama, I. 140. 4 Compliance Issues and the New Antipsychotics Thomas R. The dark material on the opitons side of siignals synapse contains receptors and substances related to receptor function. Does trading binary options work A. 61) and (3. Only 3 of top 699 2004 Elsevier Inc. The trading binary options pdf vector is usually selected randomly; however, for purposes of explanation it is more useful to use a fixed perturbation vector. Again the largest contributor to costs was hospital- ization (55) and then intermediate care facilities, the evolutionary expansion of the cortex tradign to an increase in the number of basic units, binary option trading books as one would add chips to a soft ware to expand its memory or processing speed. 1 INTEGRATING RESULTS Answering the clientssubjects questions as completely as possible. There may, however, be subtle differ- ences in the cerebral representation free binary options trading signals software different languages within the left hemisphere. Journal of Vocational Behavior, 40, 111128. A psy- chological dependence is not biochemical in origin but contains a learning cycle of achieving optimal levels of functioning while intoxicated. However, CA Consulting Psychologists Press. Endogenous hormones as a major factor in human cancer. Psychiat. The next symbol v is the 22nd symbol in the alphabet. Free binary options trading signals software home communications can be arrayed along a conti- nuum of parental involvement, ranging from notes that merely provide information to notes that ask parents to deliver predetermined consequences contingent on the reported student performance. 21) 2L The pinch force is free binary options trading signals software with this interpretation since the inductance of a conductor depends inversely optiгns its radius.Fujino, T.Forex binary options
http://newtimepromo.ru/free-binary-options-trading-signals-software.html
CC-MAIN-2017-04
en
refinedweb
Mission Highlights 1. OVERVIEW 2. ECONOMY 3. FOOD PRODUCTION IN 2001/02 4. FOOD SUPPLY AND DEMAND SITUATION 5. EMERGENCY FOOD REQUIREMENTS 6. LONG-TERM STRATEGY FOR SUSTAINABLE AGRICULTURAL DEVELOPMENT Lesotho faced severe weather variability for the second year in a row, characterized by heavy rainfalls, frost, hailstorms, and tornadoes. The erratic timings of rainfall and frost severely affected crops at planting time and during their critical development stages. Heavy rainfall in October and November delayed or prevented planting of crops in many areas and frost in March curtailed the end of the growing season. The Government of Lesotho, anticipating another poor harvest, declared a state of famine and requested FAO and WFP for assistance in reviewing the country's food situation and outlook for the 2002/03 marketing year. Consequently, an FAO/WFP Crop and Food Supply Assessment Mission was fielded from 25 April to 4 May 2002 to estimate the current season cereal production, assess the overall food supply situation and forecast import requirements for 2002/03 marketing year (April/March), including food assistance needs. A representative of Southern African Development Community (SADC) Regional Early Warning Unit (REWU) participated in the mission as an observer. The Mission received full cooperation from the Ministry of Agriculture, Cooperatives, and Land Reclamation, Ministry of Economic Planning, Disaster Management Authority, Ministry of Industry, Trade and Marketing, and Bureau of Statistics. Discussions were also held with relevant UN agencies including UNICEF, WHO, UNDP, as well as donor representatives, NGOs, and grain importers. The Mission split into two groups and was able to cover all the ten districts of the country. Interviews were conducted with each District's Principal Secretary and staff from crops, livestock, extension, disaster management, nutrition, and health divisions to get information and their assessment of the situation within their districts. Interviews were also conducted with Village Chiefs, households farmers, and traders. Overall, more than 120 interviews were conducted during the course of the mission. The Mission forecasts 2001/02 cereal production at 53 800 tonnes. Maize production is estimated at 34 500 tonnes, wheat at 14 100 tonnes and sorghum at 5 200 tonnes. Other crops such as beans, potatoes and peas were also observed on most farmers' fields that contribute to the diet of families and cash incomes when grown in larger quantities. The Mission used last year's FAO/WFP assessment mission figures for comparison of cereal production levels. On this basis production for this year will be 33 percent lower than the already reduced production last year. The Mission estimated the total cropped area of 133 600 hectares, about 60 percent of the area in normal years. The drop was partly due to heavy and widespread rains during the land preparation and planting period. Large areas in the lowlands with impermeable clay sub-soils were water-logged and took considerable time to drain and dry for tractors and machinery to operate, coupled with a shortage of tractors and oxen for ploughing in many areas. With an estimated total domestic cereal supply of 74 000 tonnes, and total utilization requirement of 412 000 tonnes (Table 5), the country faces a shortfall of 338 000 tonnes for 2002/03 marketing year. Commercial imports are estimated at 191 000 tonnes, and food aid at 147 000 tonnes, which needs to be met by the Government and external food assistance. The mission has estimated that a total of 444 800 people throughout Lesotho, but particularly in the districts of Qacha's Nek, Quthing and Mohale's Hoek which have been the hardest hit by this year's poor harvestwill require immediate emergency food assistance. Total emergency food assistance is estimated at approximately 68 955 tonnes of food, including such commodities as maize, pulses, vegetable oil and iodised salt. Different approaches to food distributions need to be examined. In less affected areas, self-targeting through food-for-work may be more appropriate than free distribution. In the worst affected areas free distribution will be required. Agriculture in Lesotho faces a catastrophic situation: crop production is declining and could cease altogether over large tracts of the country if steps are not taken to reverse soil erosion, degradation and the decline in soil fertility. The foothill and mountain areas are unsuitable for intensive cropping due to their fragile and poorly structured soils and should concentrate on livestock production. Crop yields are generally very low and declining; in the mid 1970s average maize and sorghum yields were in the order of 1 400kg/ha but today they average 450-550kg/ha. The Kingdom of Lesotho is a landlocked mountainous country of 30 355 km2 that is completely surrounded by South Africa. The entire country lies 1 000 meters above sea level with mountains reaching well over 3 000 meters. Only 406 500 ha (13 percent) of the total land area is arable, the remainder being mountainous . The country is divided into four agro-ecological zones and ten administrative districts. The Lowlands is the most populated and intensively cultivated zone, followed by the Foothills, the Mountains, and the Senqu River Valley which is the smallest zone. Climatic conditions also vary widely by region and altitude - 85 percent of rainfall occurs from October to April, while snow occurs in the mountains from May to September. Lesotho's economic performance over the last decade has been relatively mixed. The early to mid 1990s saw an economic boom that was driven by the construction of the Lesotho Highlands Water Project and the expansion of the manufacturing sector. The GDP grew at an annual average rate of 6.3 percent. However, there was a severe contraction in GDP growth in 1998-99 resulting from civil unrest. Growth resumed in 1999-00 and 2000/01, but at a slower pace of 2.4 percent and 3.2 percent, respectively. It is expected that the growth rate will remain around 3 percent for the current fiscal year. Major contributors to real GDP growth in 2000/01 were agriculture (15 percent), manufacturing and construction (40 percent) and services (36 percent). The budget for fiscal year 2002/03 projects a deficit of M423.5 million before grants-5.5 percent of GDP. However, after grants the deficit drops to M28.1 million. Major budget allocations include 22 percent for education, 8.2 percent for health, and 4.8 percent for agriculture. The latest IMF review of Lesotho's economic performance under the three-year Poverty Reduction and Growth Facility (PRGF) programme was generally favourable. Of the SDR24.5 million (US$31 million) available under the programme, SDR 10.5 million has been released. IMF has acknowledged the Government's overall commitment to the programme and the fact that all quantitative performance criteria have been met. Lesotho has been steadily improving its revenue collection, particularly of income tax, with relatively stable customs revenues (Figure 1). Figure 1: Central Government Revenue Generated by Customs and Income Taxes Under the Southern African Development Community (SADC) Free Trade Area Protocol signed in 1996, Lesotho is committed to gradually removing import restrictions and tariffs over a period of 8 years. The loss in customs revenues is projected to reach 17 percent by the time the Protocol is fully implemented. The recent trade agreement between the European Union and South Africa will further increase the fiscal deficit, as the country will lose its share of revenue from the Southern Africa Customs Union (SACU). Lesotho's current account deficit in 2000/01 improved by 30 percent and the capital and financial accounts together declined by 16 percent, resulting in a relatively better balance of payments position than in 1999-00. However, there still remains a significant trade deficit. The total export earnings average around 25 percent of total imports (Figure 2). The main exports are textiles, footwear, mohair and some live animals. Figure 2: Imports and Exports of Lesotho 1997-98 to 2000-01 The external debt stood at US$ 546.7 million at the end of fiscal year 2000/01. The multilateral component was 76 percent, bilateral 11 percent, and commercial 14 percent. The external debt to GDP ratio was 62.8 percent, and debt service as a percentage of total export revenue was 13.2 percent. Official foreign exchange reserves remain above the target floor set by the Central Bank, at 7.4 months of imports of goods and services. Lesotho's currency, the Maloti, which is pegged at par with the South African Rand, has been declining against the dollar since 1998-1999. During the fiscal year 2001/02, the Moloti fell over 38 percent against the US dollar. Commercial bank lending interest rates during the fiscal year 2000/01 ranged between 16-25 percent. The average unemployment rate for Lesotho is about 30 percent, but is higher in the rural areas. The economy has only been able to absorb about a third of individuals entering the work force every year. The unemployment situation is exacerbated by the continuing retrenchment of Basotho workers from South African mines (Table 1). Since 1991 the number of Basotho working in South Africa has declined by about 50 percent. Source: IMF Country Report 2002 The agricultural sector in Lesotho is facing extremely serious structural problems. The key issues are severe soil and land degradation, lack of proper land and crop husbandry practices, limited use of improved seeds, fertilizers and pesticides, and almost non-existent extension services. Without serious long-term interventions, it is highly probable that crop production will completely cease on large tracts of agricultural land. Lesotho's last agricultural census (1999/00) highlighted the fact that the country's cultivated land has increased from 317 900 to 406 500 hectares between 1989 and 2000, with the increase attributed to extension of cultivation to marginal lands that were previously fallow/grazing land. Unexpected heavy rain fell in late August over most areas of the country, which benefited some early land preparation for the summer cropping season. October was characterized by very wet conditions, particularly during the last ten days, which restricted land preparation and planting activities. November rainfall was normal to above normal in most areas, but was particularly heavy during the first two dekads, further delaying crop establishment, especially in southern districts. Rainfall remained above normal in December and this trend continued through January. However, February was generally dry throughout the country with erratic rainfall (Figure 3). On a cumulative basis, rainfall was above normal for the 2001/02 season, but quantities and distribution were erratic and delayed planting of crops. A widespread frost in March severely affected crops in most districts, and localised hailstorms exacerbated the problem. Figure 3: Actual vs. Normal Monthly Rainfall, September 2001-March 2002 Source: Department of Agro meteorology 1/Northern Lowland, 2/ Foothills, 3/ Southern Lowland, 4/ Mountains The demand for fertilizer is heavily dependent on rainfall and consequently varies from year to year. Fertilizer use for food crop production has ranged between 5 835 tonnes and 9 460 tonnes during the period 1996/97 to 2000/01. This translates into a national average of about 43kg of fertilizers per hectare, which is low by regional standards. Statistics for 2001/02 were not available to the mission, but it is believed that fertilizer usage decreased, largely because of the reduced purchasing power of farmers and excessively wet conditions. The low levels of fertilizer use are despite the fact that farmers of Lesotho have enjoyed highly subsidized fertilizer since the 1980s.The subsidies have ranged from 5 to 30 percent for the period 1994-95 up to the present. The marketing of fertilizers is in transition towards market liberalisation; both the private sector and cooperatives are currently distributing fertilizers that are imported from South Africa. Seeds have been equally subsidised. However, from discussions with farmers, most of the maize varieties used are local or recycled hybrid seed and only 21 percent of wheat seed used by farmers are improved varieties. For sorghum, a negligible amount of improved seeds is used. It was very obvious from the amount of land lying fallow in all districts that large areas of arable land had not been planted during the 2001/02 cropping season, as reflected in Table 2, which compares the estimates of the Mission with last year's Mission estimates. Except in Mohale's Hoek, cereal areas in 2001/02 were lower than in 2000/01, with the national total declining by 22.4 percent. This was due to the heavy and widespread rains from the second half of October to the middle of November, which delayed land preparation and planting. Large lowland areas have an impermeable clay subsoil, which when waterlogged takes considerable time to drain and dry sufficiently for tractors and machinery to work the land. The area planted was further reduced because the optimum planting date of the main food crops (maize and sorghum) was missed, and farmers decided not to plant at all. Another reason was a shortage of tractors and oxen, once conditions permitted land preparation and planting. The area planted to each of the major summer crops in each district is given in Table 3. The total national maize area is estimated at 91 300 ha, while the area under sorghum and wheat is estimated at 13 400 ha and 28 900 ha respectively. The Mission's estimates of crop yields for the year 2001/02 are based on data provided by the Department of Crops, adjusted on the basis of field assessments. The adjusted yield figures are given in Table 3. Yields per hectare are universally poor but highly variable between districts, with southern and central districts (Mafeteng, Mohale's Hoek, Quthing, Qacha's Nek and Thaba-Tseka) showing the lowest. In many areas of these districts the crops produced no grain at all and were being harvested as fodder for livestock. Northern districts (Berea, Leribe, Butha-Buthe and Mokhotlong) were relatively less affected by the disasters and yields were slightly better. During discussions with farmers, District Agricultural Officials, the Ministry of Agriculture at Headquarters, and the Disaster Management Authority Officials, it was established that late planting because of waterlogged fields, widespread early frost and hail were the main causes of the poor crop yields. The most important factor was the late planting of the maize and sorghum crops, for which any delay after the optimum planting date considerably reduces yield. The length of the growing season was further reduced by a widespread early frost, which curtailed crop growth at the grain filling stage. Localised hailstorms also caused serious damage in some districts, and cutworms and stalk borers caused further damage to the crops, particularly those planted late. It was also reported that inputs arrived late in some areas. While private traders market some inputs, these were expensive and largely inaccessible to many farmers who have no source of credit. The overall result was that the majority of farmers used farm-saved, low yielding seeds including recycled hybrid seeds. National average yields of maize and sorghum are estimated at 378 kg/ha and 388 kg/ha, respectively. Combined summer and winter wheat average yields are estimated at around 488 kg/ha. Table 4 compares this year's estimated total cereal production with the estimates made by last year's FAO/WFP Mission. At the time of the visit, some farmers were busy sowing winter wheat that will be harvested in December/January 2002/03. Planting of winter wheat normally starts mid April making use of the residual moisture and small amounts of rainfall. The late rains experienced in April and May should bode well for the winter crop, with soil moisture levels high. It is expected that there will be an increased area planted to winter wheat after the poor summer cropping season. Aggregate cereal production in 2001/02 is estimated at 53 800 tonnes compared to 80 300 tonnes estimated by last year's Mission, a decline of 33 percent on an already poor harvest. Source: Estimates by the Dept. of Crops adjusted by the Mission for year 2001/02 Beans and peas are extensively grown, largely for home consumption, but also for cash when grown in larger quantities. Most households grow beans during the summer in rotation with cereals while peas are grown during the winter using residual moisture and any rain. Bean yields during the last cropping season were extremely low and will considerably reduce the dietary protein available to households. Other crops observed were potatoes, pumpkins, sunflower, fruit trees and vegetables, which will supplement the reduced supplies of maize in household diets. The majority of rural households, (perhaps over 80 percent) own livestock, mainly cattle, sheep and goats. Many also have a horse, two or more donkeys and chickens. Large herds of cattle and flocks of sheep were noted in the mountain areas in particular, where pastures were excellent after the heavy rainy season. However, theft has become a major problem in the country. Thefts occur in and between villages, between districts, and across borders. The situation is getting worse and becoming increasingly dangerous, and is having a serious negative impact on household food security. Livestock are a vital source of cash to purchase food when agricultural production is low, as it is this year; and supply draught power for cultivation. Lesotho is a net importer of maize, wheat, pulses, dairy products and other food commodities. In a typical year, roughly half of the food consumed in the country is imported. For maize, the main staple food, imports represent 60-65 percent of national requirements. Other than for wheat, virtually all imports come from the Republic of South Africa (RSA). In accordance with the SACU agreement, Lesotho does not impose duties on imports from RSA. Thus, food prices in Lesotho are closely linked to those in RSA. The annual inflation rate in February 2002 was 12.9 percent compared with 7.6 percent in October 2001. This increase is largely attributable to higher food prices as a result of domestic and regional food shortages, increasing oil prices, and the depreciation of the South African Rand. Consumer prices for bread and cereal groups rose by over 14 percent between January and February 2002. The price of an 80 kg bag of sifted and un-sifted maize has almost doubled since June 2001 (Figure 4). Figure 4: Prices of 80 Kg Bags of Sifted and Un-sifted Maize (June 2001/May 2002) Source: Marketing Section of Ministry of Agriculture The mission observed that there was no shortage of food products in the markets of all districts. Given the high rate of unemployment in the rural areas, extremely limited income generating opportunities, and high incidence of poverty, the purchasing power of most households is extremely low. People, particularly in the foothills and mountain areas are surviving through bartering, home brewing, selling livestock, reducing consumption, and taking children out of school. Individuals infected with HIV/AIDS are also forced to reduce consumption, when in fact they should be increasing their intake of carbohydrates and proteins by 15 and 50 percent above normal levels. The serious decline in domestic cereal production, combined with decreased cereal production in the region as a whole and hence increased cereal import needs in several countries, is exerting an upward pressure on maize prices, restricting access to food for large segments of the population in Lesotho (and elsewhere in the region). The Government of Lesotho has declared a state of famine, highlighting the seriousness of the current food situation, and introduced a short term plan to assist the most vulnerable groups. Some M14 million (for 5 400 tonnes of un-sifted maize meal) have already been allocated for immediate intervention. The Government has also commissioned Lesotho Flour Mills Ltd. and Lesotho Milling Company to produce 50 kg bags of un-sifted maize meal clearly marked for immediate free distribution to the most vulnerable groups of the population. A 20 percent subsidy on un-sifted maize meal for the general population is being effected through normal market channels. The forecast of the cereal supply-demand situation for the marketing year 2002/03 (April/March) is based on the following assumptions and Mission observations (Table 5): 1/ Sorghum shortfall to be met by maize imports. Table 5 shows a cereal import requirement of 338 000 tonnes. Commercial cereal imports are estimated at 191 000 tonnes and food aid at 147 000 tonnes which will need to be covered by the Government and external assistance. Food security in Lesotho depends on the availability of employment opportunities in addition to the availability of adequate supplies. The most food insecure households in Lesotho are those that have the most difficulty generating sufficient income to meet food needs. Even in years of reasonable harvest and stable prices, some two thirds of Lesotho households are estimated to live below the poverty line (based on income needed to meet basic needs) and nearly half are classified as destitute. The recent dramatic increases in food prices have helped to push a greater proportion of the population below the poverty line, and worsened the situation of those who were already struggling. The great majority of rural Lesotho households must depend on cash income in order to survive. For the average rural household, agriculture of all types accounts for less than ten percent of income. For most households, crop production is only one of many survival strategies. However, livelihood strategies, especially of the poor, give more emphasis to agriculture than appears to be warranted by the economic facts alone. Thus even though agricultural production is never sufficient to meet all food needs, it does provide a vital supplement to other sources of food, as well as employment opportunities (through odd jobs during harvest and other peak demands for agricultural labour) for people who have few other employment options. Hence a crisis in agricultural production reduces employment (and cash) opportunities, while simultaneously forcing people to turn to the market for an increased proportion of their food needs. In the current situation, most rural people are being forced to obtain a higher proportion of food from the market, at the same time as market prices have reached very high levels. Although wage labour is seen as the key to overcoming food insecurity in Lesotho, the unemployment rate - estimated at more than 30 percent nationally - is a major barrier. Employment in the mines of South Africa has traditionally been the prime source of income for male workers. But restructuring of the mining industry towards less labour-intensive production, combined with the depressed prices for gold in the late 1990s and South Africa's preferential employment policy for its nationals, have restricted employment opportunities, and the flow of remittances back to Lesotho, plummeted over the past decade. Thus traditional coping strategies based upon seeking wage employment in South Africa are no longer viable for most of the rural poor. Livestock ownership, including cattle, sheep and goats, is seen as a major safety net for rural households. Livestock are used as a "savings bank", and are a vital source of income to purchase cereals when agricultural production is low. However poor households usually have few, if any, livestock resources on which to rely. The situation has been exacerbated by rampant and increasing livestock theft, which has become a serious threat to rural livelihoods. Even in relatively normal periods, carbohydrates (of which, roughly 80 percent is maize) account for on average three quarters of total calorie intake, and vegetable sources provide most protein. The dramatic increases in food prices has resulted in increased cereal food purchases by poor households, with a corresponding decline in purchases of other foodstuffs. Thus, there has been a decline in both total energy consumption and consumption of micro-nutrients. As a result, nutritional deficiencies have become a growing concern. Intake of animal protein is negligible in most rural areas. Vegetable intake depends on whatever is harvested from small kitchen gardens, or can be gathered wild. The increased reliance on maize to meet most energy needs has resulted in an increase in Pellagara. Intake of iodised salt has also fallen. The extremely high prevalence of HIV/AIDS in Lesotho affects all districts, and has serious repercussions for household food security, including: In other words, households affected by HIV/AIDS have fewer income-earning opportunities, but simultaneously face higher food and non-food costs. In early 2002 WFP undertook a Vulnerability Analysis and Mapping (VAM) of food security and vulnerability in Lesotho, based on secondary data sources. On the basis of 13 indicators, the VAM analysis identified the mountain areas of Lesotho as having the greatest food insecurity and highest levels of vulnerability. Mohale's Hoek, Qacha's Nek and Quthing Districts were identified as being the most vulnerable to food insecurity. These are the districts that have been most affected by the 2002 food crisis and are particularly vulnerable to climatic shocks. Most of the population is poor by income and asset measures, and coping strategies are largely based on agriculture. The remoteness from urban centres restricts market access and employment opportunities. Thus the reduced agricultural production of 2002 has severely exacerbated the situation in an already food insecure region. The VAM analysis further identified the mountain districts of Thaba Tseka and Mokhotlong as having severe levels of income poverty, but less vulnerable to shocks to agricultural production caused by abnormal weather. These districts have been less affected by the 2002 food crisis. Of the lowlands and foothills areas, only those of Quthing were identified by the VAM analysis as being vulnerable to food insecurity. In the current crisis, the foothills and lowlands of Qacha's Nek, Mohale's Hoek, Mafeting and Maseru Districts have also been severely affected. The VAM analysis also noted that even in wealthier districts, such as Maseru, Leribe and Berea, mountain areas tended to contain vulnerable and food insecure populations. Of these areas, the Maseru mountain areas have been most affected by the current crisis. The VAM analysis identified the most food insecure households in Lesotho as having the following characteristics: In other words, the households that suffer the deepest poverty and food insecurity in Lesotho are those headed by individuals who have few employment opportunities and few assets. Market interventions (price subsidies, monetization of donated food aid) could help improve the overall food security situation, by lowering prices and thus increasing accessibility. However, for a significant proportion of the population, the severity of the food and poverty situation this year, along with the reduced availability and effectiveness of usual coping strategies, means that market interventions are unlikely to be sufficient to bring food prices within their reach. For these people, some sort of targeted food assistance will be required. On the basis of the characteristics of the most vulnerable households, as determined by the WFP VAM analysis, and the level of expected harvest in 2002, it is estimated that some 444 800 people will require targeted food aid in 2002/03 (Table 6). Not all these people will require external food assistance for the whole year. Major parts of Qacha's Nek, Quthing and Mohale's Hoek have been the areas hardest hit by this year's agricultural crisis. This is the second year that these districts have suffered from a poor harvest. Vulnerable people in these districts will require full food rations with immediate effect, up until the next harvest period (April-May 2003). Although the harvest has also been poor in Thaba Tseka, Mafeteng and part of Mohale's Hoek, households are likely to obtain some production from their fields, which should support them for some months. Thus targeted food assistance to meet full food requirements is likely to be required for nine months. Mokhotlong has achieved a better harvest than the other mountain districts, and the mountain areas of Butha Buthe and Maseru have a greater range of coping strategies. Consequently these areas are likely to require targeted food assistance for a shorter period - six months. (Alternatively, half rations could be provided for 12 months). It may also be expected that, even in the worst affected areas, most households will have some, even if limited, options to obtain food through various coping strategies. Thus it is expected that direct food assistance would be required as follows: Food rations supplied through direct distribution should meet overall calorie needs, taking into account the extra calorie needs of people living in cold areas (at least during winter), and requiring additional energy to meet the physically demanding way of rural life in Lesotho. Rations should also be sufficient to meet the additional calorie requirements of people affected by HIV/AIDS (which can be assumed to affect at least one third of all beneficiaries). Insofar as possible, the rations should also meet the basic micro-nutrient requirements of a population whose diet has consisted almost entirely of maize meal. Consequently, it is expected that approximately 68 955 tonnes of food, including such commodities as maize, pulses, vegetable oil and iodised salt will be required for direct food assistance. Different approaches to food distributions should be examined. In less affected areas, self-targeting through food-for-work may be more appropriate than free distribution. In the worst affected areas (Qacha's Nek, Quthing and Mohale's Hoek), free distribution will be required. However the implementation of a broad programme of free distribution should be based on a strict registration system to ensure that food aid is targeted to those most in need. Agriculture in Lesotho, which has struggled for many years, is currently facing a catastrophic situation. Crop production could cease altogether over large tracts of the country unless steps are taken to reverse soil erosion, soil degradation and the decline in soil fertility. The foothill and mountain areas are unsuitable for intensive cropping on the fragile and poorly structured soils and should concentrate on livestock production. The physical soil conservation structures throughout the country originally designed and established when the soils were stable and of good quality, have deteriorated alarmingly and erosion has escalated as soils have become more leached, less structured and unable to hold moisture and support crop production. These terrace ridges/contours in use with the degraded soils now commonplace throughout Lesotho need to be constructed much closer together in order to deal with the increased runoff and erosion. However, this is a monumental task which would require massive funding. In addition, such physical runoff control measures can only be used safely and effectively in support of optimum soil management, together with better crop and livestock husbandry practices. Declining cereal and other crop yields are the result of a combination of factors including the continued, unsustainable use of land resources in the country, unfavourable climatic factors and worsening crop husbandry practices. Crop yields are in general very low because most of the cultivated soils have low levels of fertility, high acidity, low organic matter content and poor moisture retention capacities. As soil fertility has declined, yield levels have also decreased. In the mid 1970s average maize and sorghum yields were in the order of 1 400kg/ha. Today the average is 450-550kg/ha. Maize and sorghum cannot continue to be mono-cropped year after year. Rotations, fallows and mixed, relay and inter-cropping practices with leguminous (particularly) and other crops must become part of the farming system. In Berea District, this technique was noted by the Mission on a visit to an area of land (15 hectares), originally earmarked for an irrigation scheme. The scheme did not materialise, but the land had been under lucerne/fallow for five years; it was planted to maize and sorghum this season by a number of individual small farmers. The resultant crops were infinitely better than anywhere else in the country. Estimated yield of maize was 6.5-7 tonnes/ha and for sorghum 4-5 tonnes/ha. This also compared with maize variety trials conducted under good management nearby, with estimated yields of only 2-2.5 tonnes/ha (one third of the yield), and local farmer yields of 0.4-0.5 tonnes/ha (one fifteenth of the yield). The concept of an enriched fallow (containing legumes) in the crop rotation cannot be overemphasised. Farmers should be encouraged to produce only one good grain crop a year on their land, utilising the best crop husbandry techniques available. After harvest, a suitable fallow crop should be established to help improve soil fertility, soil structure and soil moisture retention capacity for the next food grain crop. As recommended in the Soil Fertility Initiative Document, prepared for Lesotho by FAO (1999), what is needed is a comprehensive participatory approach that takes advantage of synergies of practices at field level, offering production, economic and conservation benefits. This approach would emphasise building of soil organic matter levels through proper use of inorganic fertilizers, manure and ash, coupled with intercropping of improved cereals and legumes, conservation farming and agro-forestry practices. The overall benefits are the improvement of soil structure and fertility, food security, cash incomes, dietary diversity and protection of the environment. The improved soil structure and fertility result in increased efficiency in plant nutrients uptake and water storage, thus enhancing the profitability of crop production as well as enabling crops to withstand dry periods and drought. Another major issue is that the majority of farmers around the country are unable to follow any of these initiatives or improve their crop husbandry practices, because they are isolated and marginalised within the system. The agricultural extension service in the villages and field areas is totally inadequate - very understaffed, lacking in motivation and short of transport. 1. Land Tenure: A study of the present land tenure situation in Lesotho, together with a strategy to promote secure access to land for farmers throughout the country should be carried out. In addition to expanding access to credit and limiting existing disputes, the development of an effective tenure system will have a profound impact on the ability of communities to enter into productive partnership arrangements and to intensify production. Some aspects of the traditional land tenure system work against the adoption of soil restorative practices. Land that has not been cultivated for three successive years can be reallocated to another household, thus mitigating against the use of fallows in crop rotations. Furthermore farm households only have exclusive centers to their crop fields up until the time the crop has been harvested. Thereafter the land and any remaining crop residues becomes an open access grazing resource until the next cropping season, so it would be going against the social norms of the community for an individual household to fence its crop fields. Such free grazing can also lead to the destruction of grassed waterways and conservation banks within the arable lands. 2. Watershed Management: An FAO/TCP project undertaken in 1988/89 was instrumental in introducing to Lesotho the concepts and principles of a broader more holistic approach to soil and water conservation known as "better land husbandry". Within this approach, the technical focus for soil conservation is on combating soil productivity decline, which is a result not only of soil erosion but also of changes in a soil's biological, chemical and physical properties. Following on from this work, there is now a need for a broader study of complete watersheds in order to improve their management and long-term sustainability, and to benefit downstream farmers and the country as a whole. 3. Conservation Agriculture Technology: This technology has proved to be extremely successful in many countries in Africa and around the world. It conserves, improves and makes more efficient use of natural resources through integrated management of available soil, water and biological resources. It leads to environmental conservation as well as enhanced and sustained agricultural production. It is a no tillage system involving the maintenance of crop cover (live or dead) on the soil surface, and direct seeding or planting of crops through this cover using specialised equipment. Besides protecting the soil and the crop against erosion and water loss by run-off or evaporation, the soil cover also inhibits the germination of many weed seeds. A programme should be devised under a TCP project to provide a national level conceptual and policy framework for the formulation and implementation of a series of area based and farmer centred field projects, with complementary institutional strengthening and in-service training programmes at national and district level. 4. Improved seed production and promotion at community level, and assistance to enhance the performance of the livestock sector. This report is prepared on the responsibility of the FAO and WFP Secretariats with information from official and unofficial sources. Since conditions may change rapidly, please contact the undersigned for further information if required. Office of the Chief GIEWS, FAO Ms. J. Lewis Regional Director, ODK, WFP: Please note that same way as the worldwide list. 1 Lesotho Agricultural Census 1999/2000. Volume 1: Rural Households and Crop Statistics. Bureau of Statistics (BOS), Lesotho.
http://www.fao.org/docrep/005/y6813e/y6813e00.htm
CC-MAIN-2017-04
en
refinedweb