Dataset fields: text (string, 20 to 1.01M characters), url (string, 14 to 1.25k characters), dump (string, 9 to 15 characters), lang (string, 4 classes), source (string, 4 classes)
RFORK(2)                BSD Programmer's Manual                RFORK(2)

NAME
     rfork - control new processes

SYNOPSIS
     #include <sys/param.h>
     #include <unistd.h>

     int rfork(int flags);

DESCRIPTION
     ... set, the parent and child will share the parent's file descriptor.
     The parent process returns the process ID (PID) of the child process.
     The child process returns 0. The range of the process ID is defined in
     <sys/proc.h> and is currently between 1 and 32766, inclusive.

SEE ALSO
     _exit(2), execve(2), fork(2), intro(2), vfork(2)

HISTORY
     The rfork() function first appeared in Plan 9.

MirOS BSD #10-current                                          June 17,
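Below is a minimal usage sketch, not part of the manual page itself. It assumes the BSD RFPROC flag, which requests creation of a new process; check <sys/param.h> on your system for the flags actually supported, since rfork() flag sets differ between systems.

#include <sys/param.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
        /* RFPROC is assumed here; it asks rfork() to create a new process. */
        int pid = rfork(RFPROC);

        if (pid == -1) {
                perror("rfork");
                exit(1);
        }
        if (pid == 0) {
                /* child: rfork() returned 0 */
                _exit(0);
        }
        /* parent: rfork() returned the child's PID (between 1 and 32766) */
        printf("child pid = %d\n", pid);
        return 0;
}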
https://www.mirbsd.org/htman/i386/man2/rfork.htm
CC-MAIN-2015-40
en
refinedweb
roadhouse 0.6

Within roadhouse, a config can be applied to a VPC. This allows the same configuration to be used across multiple VPCs, which is useful when you want to run several VPCs with the same configuration, for instance when running across multiple datacenters for fault tolerance.

Config File Syntax

The config file is YAML based. Groups are the top-level objects. Within a group are options and rules. Rules are specified using a syntax similar to tcpdump (at a very, very trivial level). For the ICMP protocol we use ICMP Type Numbers as the port. More information is available at:

<protocol:optional, tcp by default> <port> <group_or_ip_mask_optional>

It should be easier to understand based on a valid example configuration:

test_database_group:
  options:
    description: cassandra and redis
    prune: true                          # remove rules not listed here
  rules:
    - tcp port 22 166.1.1.1/32           # mysterious office IP
    - tcp port 9160, 6379 test_web_group # refer to a group by name
    - port 55 192.168.1.1                # /32 by default
    - tcp port 22-50, 55-60 192.168.1.1
    - icmp port 0 192.168.1.1            # ICMP Type 0; Echo Reply

test_web_group:
  options:
    description: web servers
    prune: false                         # false by default
  rules:
    - tcp port 80 0.0.0.0/0
    - icmp port 8 192.168.1.1/32         # ICMP Type 8; Echo Request

Usage

from boto import ec2, vpc  # connection helpers (boto assumed)
from roadhouse.group import SecurityGroupsConfig

v = vpc.connect_to_region('us-west-1')
e = ec2.connect_to_region('us-west-1')

# assuming you only have 1 vpc already created
# otherwise you'll need to pick the right VPC you want
# to apply your changes to
vpc = v.get_all_vpcs()[0]

config = SecurityGroupsConfig.load("roadhouse.yaml")
config.configure(e)
config.apply(vpc)

Development

In a virtualenv, pip install -r requirements

Downloads (All Versions):
- 11 downloads in the last day
- 106 downloads in the last week
- 360 downloads in the last month

- Author: Jon Haddad
- Keywords: aws, yaml, configuration
- License: BSD 2 Clause
- Categories
- Package Index Owner: rustyrazorblade
- DOAP record: roadhouse-0.6.xml
https://pypi.python.org/pypi/roadhouse
CC-MAIN-2015-40
en
refinedweb
IRC log of er on 2011-11-23 Timestamps are in UTC. 13:28:17 [RRSAgent] RRSAgent has joined #er 13:28:17 [RRSAgent] logging to 13:28:19 [trackbot] RRSAgent, make logs public 13:28:19 [Zakim] Zakim has joined #er 13:28:21 [trackbot] Zakim, this will be 3794 13:28:21 [Zakim] ok, trackbot; I see WAI_ERTWG()8:30AM scheduled to start in 2 minutes 13:28:22 [trackbot] Meeting: Evaluation and Repair Tools Working Group Teleconference 13:28:22 [trackbot] Date: 23 November 2011 13:28:52 [cstrobbe] cstrobbe has joined #er 13:29:22 [Zakim] WAI_ERTWG()8:30AM has now started 13:29:29 [Zakim] +[IPcaller] 13:29:33 [shadi] zakim, ipcaller is me 13:29:33 [Zakim] +shadi; got it 13:30:33 [shadi] agenda+ Welcome 13:30:33 [shadi] agenda+ Complete log of issues 13:30:33 [shadi] agenda+ EARL 1.0 Test Suite 13:30:33 [shadi] agenda+ EARL 1.0 Checker(s) 13:30:33 [shadi] agenda+ REC-track of EARL modules? 13:30:34 [shadi] agenda+ Upcoming meeting schedule 13:30:58 [philipA] philipA has joined #er 13:31:11 [Zakim] +Christophe_Strobbe 13:31:33 [Zakim] +??P10 13:31:57 [carlosV] carlosV has joined #er 13:31:57 [philipA] Zakim, ??P10 is philipA 13:31:57 [Zakim] +philipA; got it 13:32:02 [Zakim] + +030001aaaa 13:32:25 [shadi] zakim, aaaa is Emmanuelle 13:32:26 [Zakim] +Emmanuelle; got it 13:32:36 [Zakim] +??P12 13:32:55 [shadi] zakim, ??p12 is CarlosV 13:32:55 [Zakim] +CarlosV; got it 13:33:06 [shadi] zakim, who is on the phone? 13:33:06 [Zakim] On the phone I see shadi, Christophe_Strobbe, philipA, Emmanuelle (muted), CarlosV 13:33:54 [shadi] agenda? 13:34:51 [shadi] zakim, take up agendum 2 13:34:51 [Zakim] agendum 2. "Complete log of issues" taken up [from shadi] 13:35:04 [cstrobbe] scribe: cstrobbe 13:35:24 [shadi] - < > 13:35:24 [shadi] - < > 13:35:24 [shadi] - < > 13:35:24 [shadi] - < > 13:35:26 [shadi] - < > 13:36:36 [cstrobbe] and the other URLs 13:36:56 [cstrobbe] EARL Guide needs editing. 13:37:41 [cstrobbe] WG participants should check the list, the resolutions and the open issues. 13:38:29 [cstrobbe] CarlosV: Talk to Semantic Web people on OWL etc? 13:38:53 [cstrobbe] Shadi: See first 5 open issues. 13:39:30 [cstrobbe] Shadi: Could use some help on design of the Schema (OWL constraints, conformance requirements). 13:40:17 [cstrobbe] Shadi: Also, send your thoughts to the list. 13:40:41 [cstrobbe] Shadi: Third issue: dc or dct prefix: conflicting comments. 13:40:56 [cstrobbe] Shadi: Samuel was also going to look into this. 13:41:31 [cstrobbe] CarlosV: Elsewhere comments preferred dct. 13:42:16 [cstrobbe] Shadi: Fourth issue: profiles for conformance. About how to phrase conformance requirements. 13:42:51 [cstrobbe] Shadi: Some other W3C specs use profiles. Please look into this and discuss it on the mailing list. 13:43:18 [cstrobbe] Shadi: Fifth open issue: move conformance to the top of the document. 13:44:44 [cstrobbe] CStrobbe: other W3C specs: no consistency in where they put the conformance section. 13:45:12 [cstrobbe] Shadi: Pending = we took a resolution but action is required to update the draft. 13:45:53 [shadi] 13:46:01 [cstrobbe] Shadi: Open issues for the EARL Guide. 13:46:46 [cstrobbe] Shadi: dc vs dct prefix issue propagates through all the EARL docs. 13:48:41 [cstrobbe] Shadi: open issue on overlap between TAP and xUnit : no overlap; see we could take a resolution today. 13:49:06 [cstrobbe] Shadi: subsequent discussion with TAP people: e.g. TAP also supports streaming. 
13:50:42 [cstrobbe] RESOLUTION: The use cases for TAP and xUnit are significantly different from EARL so that further work is not necessary. 13:51:11 [shadi] zakim, take up agendum 3 13:51:12 [Zakim] agendum 3. "EARL 1.0 Test Suite" taken up [from shadi] 13:51:18 [shadi] 13:51:57 [cstrobbe] Shadi: EARL can't pass Candidate Recommendation phase without a test suite. 13:52:16 [cstrobbe] Shadi: A test suite would help evaluate an implementation. 13:52:40 [cstrobbe] Shadi: We need to think about both EARL input and EARL output. 13:52:47 [shadi] 13:53:01 [cstrobbe] Shadi: minimal test file / EARL report. 13:53:36 [cstrobbe] Shadi: The second test could be a complete example with example code from the spec. 13:53:49 [cstrobbe] Shadi: We have something like that in the EARL Guide. 13:54:38 [cstrobbe] Shadi: A complete working example that we can use throughout. Using all the features. 13:55:21 [cstrobbe] CarlosV: Working on HTTP-in-RDF. 13:55:51 [philipA] <rdf:RDF 13:55:52 [philipA] xmlns:cs=" " 13:55:54 [philipA] xmlns:rdf=" " 13:55:55 [philipA] xmlns:foaf=" " 13:55:57 [philipA] xmlns:dct=" " 13:55:58 [philipA] xmlns:rdfs=" " 13:56:00 [philipA] xml:base=" "> 13:56:01 [philipA] <cs:ComplianceTest rdf: 13:56:03 [philipA] <dct:source rdf:resource=" "/> 13:56:04 [philipA] <dct:source rdf:resource=" "/> 13:56:06 [philipA] <cs:type rdf:resource=" "/> 13:56:07 [philipA] <dct:license rdf:resource=" "/> 13:56:09 [philipA] <dct:rightsHolder> 13:56:10 [philipA] <foaf:Organization rdf:about=" "/> 13:56:12 [philipA] </dct:rightsHolder> 13:56:13 [philipA] <dct:conformsTo rdf:resource=" "/> 13:56:15 > 13:56:18 [philipA] <dct:title>F30: Failure of Success Criterion 1.1.1 and 1.2.1 due to using text alternatives that are not alternatives (e.g., filenames or placeholder text)</dct:title> 13:56:21 [philipA] </cs:ComplianceTest> 13:56:23 [philipA] <foaf:Organization rdf:about=" "> 13:56:25 [philipA] <rdfs:label>Fraunhofer Gesellschaft</rdfs:label> 13:56:27 [philipA] <foaf:homepage rdf:resource=" "/> 13:56:29 [philipA] </foaf:Organization> 13:56:31 [philipA] </rdf:RDF> 13:56:57 [carlosV] carlosV has joined #er 13:58:42 [cstrobbe] CarlosV: working on test case description; cf TCDL. 13:59:04 [cstrobbe] Carlos: Also have expected outcome and pointers. 14:00:04 [cstrobbe] CarlosV: SPARQL was also under consideration. 14:01:48 [shadi] 14:04:13 [samuelm] samuelm has joined #er 14:06:04 [Zakim] +??P14 14:06:14 [Zakim] -Emmanuelle 14:06:53 [shadi] zakim, ??p14 is samuelm 14:06:53 [Zakim] +samuelm; got it 14:09:30 [cstrobbe] Shadi: We should generate tests, each of which highlights a different feature, e.g. subclass Info, ... 14:10:06 [cstrobbe] CarlosV: We need test files and derived EARL reports for these test files. 14:13:10 [cstrobbe] Shadi: For the test suite, we want test reports rather than test descriptions. 14:14:33 [cstrobbe] Shadi: First example is minimal; could use earl:Assertor instead. 14:15:23 [cstrobbe] Shadi: If everyone generates 5-10 files, we have a nice set. 14:15:33 [cstrobbe] CarlosV: CVS access? 14:16:05 [cstrobbe] Shadi: Yes, or just send them to me. But we will need to think about where to put them for public access. 14:16:37 [cstrobbe] ChristopheS: Then, what exactly do we do with the test files? 14:17:05 [cstrobbe] Shadi: Tools that input EARL... 14:17:36 [shadi] zakim, take up agendum 4 14:17:36 [Zakim] agendum 4. "EARL 1.0 Checker(s)" taken up [from shadi] 14:18:14 [cstrobbe] Shadi: We need EARL checkers: software that checks the report to check that it is valid. Cf conformance requirements. 
14:18:58 [cstrobbe] Shadi: There are PHP RDF libraries and libraries in other languages. A checker could be fairly simple. 14:23:33 [cstrobbe] Shadi: We'll need at least the test suite to enter CR; we will also need the EARL checker to exit Candidate Recommendation phase. 14:23:41 [shadi] zakim, take up agendum 5 14:23:41 [Zakim] agendum 5. "REC-track of EARL modules?" taken up [from shadi] 14:24:14 [cstrobbe] Shadi: Open issue: other specification on Recommendation track. 14:24:47 [cstrobbe] Shadi: This would imply more test suites, implementation documentation and more checkers. 14:25:35 [cstrobbe] Shadi: Keeping modules off REC track, to avoid getting locked in CR phase? 14:26:05 [cstrobbe] Shadi: Currently, only EARL Schema is on REC track; all other documents are planned to become WG Notes. 14:26:23 [cstrobbe] Shadi: Any objections or comments? 14:27:22 [cstrobbe] CarlosV: Implementation could be quite complex with some of the specs. 14:28:29 [cstrobbe] RESOLUTION: Keep EARL modules as Working Group Notes. 14:29:15 [cstrobbe] Shadi: Open action items. 14:29:52 [cstrobbe] samuelm: Contradictory sources regarding dc versus dct namespace prefix. 14:30:23 [cstrobbe] Shadi: On Dublin Core site: contact them on the latest status. 14:31:18 [cstrobbe] samuelm: dc or dct is just a convention; we may not get a definitive answer from Dublin Core. 14:31:37 [shadi] zakim, take up agendum 6 14:31:37 [Zakim] agendum 6. "Upcoming meeting schedule" taken up [from shadi] 14:31:51 [Zakim] -CarlosV 14:32:11 [cstrobbe] Shadi: No meeting next week, nor on 7 December. 14:33:03 [cstrobbe] Shadi: By next meeting (14 December): drafts for WG member to look at. Please also work on the test suites and possibly an EARL checker. 14:33:15 [Zakim] -samuelm 14:33:16 [Zakim] -philipA 14:33:18 [Zakim] -Christophe_Strobbe 14:33:19 [samuelm] samuelm has left #er 14:33:21 [Zakim] -shadi 14:33:22 [Zakim] WAI_ERTWG()8:30AM has ended 14:33:25 [Zakim] Attendees were shadi, Christophe_Strobbe, philipA, +030001aaaa, Emmanuelle, CarlosV, samuelm 14:33:33 [shadi] trackbot, end meeting 14:33:33 [trackbot] Zakim, list attendees 14:33:33 [Zakim] sorry, trackbot, I don't know what conference this is 14:33:34 [trackbot] RRSAgent, please draft minutes 14:33:34 [RRSAgent] I have made the request to generate trackbot 14:33:35 [trackbot] RRSAgent, bye 14:33:35 [RRSAgent] I see no action items
http://www.w3.org/2011/11/23-er-irc
CC-MAIN-2015-40
en
refinedweb
One? At the Broad, we typically put it somewhere like this: /home/radon01/depristo/work/local/scala-2.7.5.final Next, create a symlink from this directory to trunk/scala/installation: ln -s /home/radon01/depristo/work/local/scala-2.7.5.final trunk/scala/installation Right now the only way to get scala walkers into the GATK is by explicitly setting your CLASSPATH in your .my.cshrc file: setenv CLASSPATH /humgen/gsa-scr1/depristo/dev/GenomeAnalysisTK/trunk/dist/FourBaseRecaller.jar:/humgen/gsa-scr1/depristo/dev/GenomeAnalysisTK/trunk/dist/GenomeAnalysisTK.jar:/humgen/gsa-scr1/depristo/dev/GenomeAnalysisTK/trunk/dist/Playground.jar:/humgen/gsa-scr1/depristo/dev/GenomeAnalysisTK/trunk/dist/StingUtils.jar:/humgen/gsa-scr1/depristo/dev/GenomeAnalysisTK/trunk/dist/bcel-5.2.jar:/humgen/gsa-scr1/depristo/dev/GenomeAnalysisTK/trunk/dist/colt-1.2.0.jar:/humgen/gsa-scr1/depristo/dev/GenomeAnalysisTK/trunk/dist/google-collections-0.9.jar:/humgen/gsa-scr1/depristo/dev/GenomeAnalysisTK/trunk/dist/javassist-3.7.ga.jar:/humgen/gsa-scr1/depristo/dev/GenomeAnalysisTK/trunk/dist/junit-4.4.jar:/humgen/gsa-scr1/depristo/dev/GenomeAnalysisTK/trunk/dist/log4j-1.2.15.jar:/humgen/gsa-scr1/depristo/dev/GenomeAnalysisTK/trunk/dist/picard-1.02.63.jar:/humgen/gsa-scr1/depristo/dev/GenomeAnalysisTK/trunk/dist/picard-private-875.jar:/humgen/gsa-scr1/depristo/dev/GenomeAnalysisTK/trunk/dist/reflections-0.9.2.jar:/humgen/gsa-scr1/depristo/dev/GenomeAnalysisTK/trunk/dist/sam-1.01.63.jar:/humgen/gsa-scr1/depristo/dev/GenomeAnalysisTK/trunk/dist/simple-xml-2.0.4.jar:/humgen/gsa-scr1/depristo/dev/GenomeAnalysisTK/trunk/dist/GATKScala.jar:/humgen/gsa-scr1/depristo/local/scala-2.7.5.final/lib/scala-library.jar Really this needs to be manually updated whenever any of the libraries are updated. If you see this error: Caused by: java.lang.RuntimeException: java.util.zip.ZipException: error in opening zip file at org.reflections.util.VirtualFile.iterable(VirtualFile.java:79) at org.reflections.util.VirtualFile$5.transform(VirtualFile.java:169) at org.reflections.util.VirtualFile$5.transform(VirtualFile.java:167) at org.reflections.util.FluentIterable$3.transform(FluentIterable.java:43) at org.reflections.util.FluentIterable$3.transform(FluentIterable.java:41) at org.reflections.util.FluentIterable$ForkIterator.computeNext(FluentIterable.java:81) at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:132) at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:127) at org.reflections.util.FluentIterable$FilterIterator.computeNext(FluentIterable.java:102) at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:132) at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:127) at org.reflections.util.FluentIterable$TransformIterator.computeNext(FluentIterable.java:124) at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:132) at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:127) at org.reflections.Reflections.scan(Reflections.java:69) at org.reflections.Reflections.<init>(Reflections.java:47) at org.broadinstitute.sting.utils.PackageUtils.<clinit>(PackageUtils.java:23) It's because the libraries aren't updated. 
Basically just do an ls of your trunk/dist directory after the GATK has been built, make this your classpath as above, and tack on:

/humgen/gsa-scr1/depristo/dev/GenomeAnalysisTK/trunk/dist/GATKScala.jar:/humgen/gsa-scr1/depristo/local/scala-2.7.5.final/lib/scala-library.jar

A command that almost works (but you'll need to replace the spaces with colons) is:

#setenv CLASSPATH $CLASSPATH `ls /humgen/gsa-scr1/depristo/dev/GenomeAnalysisTK/trunk/dist/*.jar` /humgen/gsa-scr1/depristo/dev/GenomeAnalysisTK/trunk/dist/GATKScala.jar:/humgen/gsa-scr1/depristo/local/scala-2.7.5.final/lib/scala-library.jar

All of the Scala source code lives in scala/src, which you build using

ant scala

There are already some example Scala walkers in scala/src, so doing a standard checkout, installing scala, and setting up your environment should allow you to run something like:

gsa2 ~/dev/GenomeAnalysisTK/trunk > ant scala
Buildfile: build.xml
init.scala:
scala:
[echo] Sting: Compiling scala!
[scalac] Compiling 2 source files to /humgen/gsa-scr1/depristo/dev/GenomeAnalysisTK/trunk/scala/classes
[scalac] warning: there were deprecation warnings; re-run with -deprecation for details
[scalac] one warning found
[scalac] Compile succeeded with 1 warning; see the compiler output for details.
[delete] Deleting: /humgen/gsa-scr1/depristo/dev/GenomeAnalysisTK/trunk/dist/GATKScala.jar
[jar] Building jar: /humgen/gsa-scr1/depristo/dev/GenomeAnalysisTK/trunk/dist/GATKScala.jar

Until we can include Scala walkers along with the main GATK jar (avoiding the classpath issue too), you have to invoke your Scala walkers using this syntax:

java -Xmx2048m org.broadinstitute.sting.gatk.CommandLineGATK -T BaseTransitionTableCalculator -R /broad/1KG/reference/human_b36_both.fasta -I /broad/1KG/DCC_merged/freeze5/NA12878.pilot2.SLX.bam -l INFO -L 1:1-100

Here, the BaseTransitionTableCalculator walker is written in Scala and is being loaded into the system by the GATK walker manager. Otherwise everything looks like a normal GATK module.

Most GATK walkers are really too complex to easily test using the standard unit test framework. It's just not feasible to make artificial read piles and then extrapolate from simple tests passing whether the system as a whole is working correctly. However, we need some way to determine whether changes to the core of the GATK are altering the expected output of complex walkers like BaseRecalibrator or SingleSampleGenotyper. In addition to correctness, we want to make sure that the performance of key walkers isn't degrading over time, so that the speed of calling SNPs, cleaning indels, etc., isn't slowly creeping down over time. Since we are now using a bamboo server to automatically build and run unit tests (as well as measure their runtimes), we want to put as many good walker tests as possible into the test framework so we capture performance metrics over time.

To make this testing process easier, we've created a WalkerTest framework that lets you invoke the GATK using command-line GATK commands in the JUnit system and test for changes in your output files by comparing the current ant build results to previous runs via an MD5 sum. It's a bit coarse-grained, but it will work to ensure that changes to key walkers are detected quickly by the system, and authors can either update the expected MD5s or go track down bugs. The system is fairly straightforward to use. Ultimately we will end up with JUnit-style tests in the unit testing structure.
The piece of code below checks the MD5 of the SingleSampleGenotyper's GELI text output at LOD 3 and LOD 10.

package org.broadinstitute.sting.gatk.walkers.genotyper;

import org.broadinstitute.sting.WalkerTest;
import org.junit.Test;

import java.util.HashMap;
import java.util.Map;
import java.util.Arrays;

public class SingleSampleGenotyperTest extends WalkerTest {
    @Test
    public void testLOD() {
        HashMap<Double, String> e = new HashMap<Double, String>();
        e.put( 10.0, "e4c51dca6f1fa999f4399b7412829534" );
        e.put( 3.0, "d804c24d49669235e3660e92e664ba1a" );

        for ( Map.Entry<Double, String> entry : e.entrySet() ) {
            WalkerTest.WalkerTestSpec spec = new WalkerTest.WalkerTestSpec(
                    " %s --variant_output_format GELI -L 1:10,000,000-11,000,000 -m EMPIRICAL -lod " + entry.getKey(),
                    1,
                    Arrays.asList(entry.getValue()));
            executeTest("testLOD", spec);
        }
    }
}

The fundamental piece here is to inherit from WalkerTest. This gives you access to the executeTest() function that consumes a WalkerTestSpec:

public WalkerTestSpec(String args, int nOutputFiles, List<String> md5s)

The WalkerTestSpec takes regular, command-line style GATK arguments describing what you want to run, the number of output files the walker will generate, and your expected MD5s for each of these output files. The args string can contain %s String.format specifications, and for each of the nOutputFiles, the executeTest() function will (1) generate a tmp file for output and (2) call String.format on your args to fill in the tmp output files in your arguments string. For example, in the above argument string varout is followed by %s, so our single SingleSampleGenotyper output is the variant output file.

When you add a WalkerTest-inherited unit test to the GATK, and then build test, you'll see output that looks like:

[junit] WARN 13:29:50,068 WalkerTest - --------------------------------------------------------------------------------
[junit] WARN 13:29:50,068 WalkerT,408 WalkerTest - => testLOD PASSED
[junit] WARN 13:30:39,408 WalkerTest - => testLOD PASSED
[junit] WARN 13:30:39,409 WalkerTest - --------------------------------------------------------------------------------
[junit] WARN 13:30:39,409 WalkerT - => testLOD PASSED
[junit] WARN 13:31:30,213 WalkerTest - => testLOD PASSED
[junit] WARN 13:31:30,214 SingleSampleGenotyperTest -
[junit] WARN 13:31:30,214 SingleSampleGenotyperTest -

We keep all of the permanent GATK testing data in:

/humgen/gsa-scr1/GATK_Data/Validation_Data/

A good set of data to use for walker testing is the CEU daughter data from 1000 Genomes:

gsa2 ~/dev/GenomeAnalysisTK/trunk > ls -ltr /humgen/gsa-scr1/GATK_Data/Validation_Data/NA12878.1kg.p2.chr1_10mb_1*.bam /humgen/gsa-scr1/GATK_Data/Validation_Data/NA12878.1kg.p2.chr1_10mb_1*.calls
-rw-rw-r--+ 1 depristo wga 51M 2009-09-03 07:56 /humgen/gsa-scr1/GATK_Data/Validation_Data/NA12878.1kg.p2.chr1_10mb_11_mb.SLX.bam
-rw-rw-r--+ 1 depristo wga 185K 2009-09-04 13:21 /humgen/gsa-scr1/GATK_Data/Validation_Data/NA12878.1kg.p2.chr1_10mb_11_mb.SLX.lod5.variants.geli.calls
-rw-rw-r--+ 1 depristo wga 164M 2009-09-04 13:22 /humgen/gsa-scr1/GATK_Data/Validation_Data/NA12878.1kg.p2.chr1_10mb_11_mb.SLX.lod5.genotypes.geli.calls
-rw-rw-r--+ 1 depristo wga 24M 2009-09-04 15:00 /humgen/gsa-scr1/GATK_Data/Validation_Data/NA12878.1kg.p2.chr1_10mb_11_mb.SOLID.bam
-rw-rw-r--+ 1 depristo wga 12M 2009-09-04 15:01 /humgen/gsa-scr1/GATK_Data/Validation_Data/NA12878.1kg.p2.chr1_10mb_11_mb.454.bam
-rw-r--r--+ 1 depristo wga 91M 2009-09-04 15:02 /humgen/gsa-scr1/GATK_Data/Validation_Data/NA12878.1kg.p2.chr1_10mb_11_mb.allTechs.bam

The tests depend on a variety of input files that are generally constrained to three mount points on the internal Broad network:

- /seq/
- /humgen/1kg/
- /humgen/gsa-hpprojects/GATK/Data/Validation_Data/

To run the unit and integration tests you'll have to have access to these files. They may have different mount points on your machine (say, if you're running remotely over the VPN and have mounted the directories on your own machine).

Every file that generates an MD5 sum as part of the WalkerTest framework will be copied to <MD5>.integrationtest in the integrationtests subdirectory of the GATK trunk. This MD5 database of results enables you to easily examine the results of an integration test as well as compare the results of a test before/after a code change. For example, below is a test for the UnifiedGenotyper where, due to a code change, the output VCF differs from the VCF with the expected MD5 value in the test code itself. The test provides the path to the two results files as well as a diff command to compare the expected to the observed output:

[junit] --------------------------------------------------------------------------------
[junit] Executing test testParameter[-genotype] with GATK arguments: -T UnifiedGenotyper -R /broad/1KG/reference/human_b36_both.fasta -I /humgen/gsa-hpprojects/GATK/data/Validation_Data/NA12878.1kg.p2.chr1_10mb_11_mb.SLX.bam -varout /tmp/walktest.tmp_param.05997727998894311741.tmp -L 1:10,000,000-10,010,000 -genotype
[junit] ##### MD5 file is up to date: integrationtests/ab20d4953b13c3fc3060d12c7c6fe29d.integrationtest
[junit] Checking MD5 for /tmp/walktest.tmp_param.05997727998894311741.tmp [calculated=ab20d4953b13c3fc3060d12c7c6fe29d, expected=0ac7ab893a3f550cb1b8c34f28baedf6]
[junit] ##### Test testParameter[-genotype] is going fail #####
[junit] ##### Path to expected file (MD5=0ac7ab893a3f550cb1b8c34f28baedf6): integrationtests/0ac7ab893a3f550cb1b8c34f28baedf6.integrationtest
[junit] ##### Path to calculated file (MD5=ab20d4953b13c3fc3060d12c7c6fe29d): integrationtests/ab20d4953b13c3fc3060d12c7c6fe29d.integrationtest
[junit] ##### Diff command:

Examining the diff, we see a few lines that have changed the DP count in the new code:

385,387c385,387
< 1 10000345 . A . 106.54 . AN=2;DP=33;Dels=0.00;MQ=89.17;MQ0=0;SB=-10.00 GT:DP:GL:GQ 0/0:25:-0.09,-7.57,-75.74:74.78
< 1 10000346 . A . 103.75 . AN=2;DP=31;Dels=0.00;MQ=88.85;MQ0=0;SB=-10.00 GT:DP:GL:GQ 0/0:24:-0.07,-7.27,-76.00:71.99
< 1 10000347 . A . 109.79 . AN=2;DP=31;Dels=0.00;MQ=88.85;MQ0=0;SB=-10.00 GT:DP:GL:GQ 0/0:26:-0.05,-7.85,-84.74:78.04
---
> 1 10000345 . A . 106.54 . AN=2;DP=32;Dels=0.00;MQ=89.50;MQ0=0;SB=-10.00 GT:DP:GL:GQ 0/0:25:-0.09,-7.57,-75.74:74.78
> 1 10000346 . A . 103.75 . AN=2;DP=30;Dels=0.00;MQ=89.18;MQ0=0;SB=-10.00 GT:DP:GL:GQ 0/0:24:-0.07,-7.27,-76.00:71.99
> 1 10000347 . A . 109.79 . AN=2;DP=30;Dels=0.00;MQ=89.18;MQ0=0;SB=-10.00 GT:DP:GL:GQ 0/0:26:-0.05,-7.85,-84.74:78

Whether this is the expected change is up to you to decide, but the system makes it as easy as possible to see the consequences of your code change.

The walker test framework supports an additional syntax for ensuring that a particular Java exception is thrown when a walker executes, using a simple alternate version of the WalkerTestSpec object.
Rather than specifying the MD5 of the result, you can provide a single subclass of Exception.class and the testing framework will ensure that when the walker runs, an instance (class or subclass) of your expected exception is thrown. The system also flags if no exception is thrown. For example, the following code tests that the GATK can detect and error out when incompatible VCF and FASTA files are given:

@Test
public void fail8() {
    executeTest("hg18lex-v-b36", test(lexHG18, callsB36));
}

private WalkerTest.WalkerTestSpec test(String ref, String vcf) {
    return new WalkerTest.WalkerTestSpec("-T VariantsToTable -M 10 -B:two,vcf " + vcf + " -F POS,CHROM -R " + ref + " -o %s",
            1,
            UserException.IncompatibleSequenceDictionaries.class);
}

During the integration test this looks like:

[junit] Executing test hg18lex-v-b36 with GATK arguments: -T VariantsToTable -M 10 -B:two,vcf /humgen/gsa-hpprojects/GATK/data/Validation_Data/lowpass.N3.chr1.raw.vcf -F POS,CHROM -R /humgen/gsa-hpprojects/GATK/data/Validation_Data/lexFasta/lex.hg18.fasta -o /tmp/walktest.tmp_param.05541601616101756852.tmp -l WARN -et NO_ET
[junit]
[junit] Wanted exception class org.broadinstitute.sting.utils.exceptions.UserException$IncompatibleSequenceDictionaries, saw class org.broadinstitute.sting.utils.exceptions.UserException$IncompatibleSequenceDictionaries
[junit] => hg18lex-v-b36 PASSED

Please do not put any extremely long tests in the regular ant build test target. We are currently splitting the system into fast and slow tests so that unit tests can be run in < 3 minutes, while saving a test target for long-running regression tests. More information on that will be posted.

An expected MD5 string of "" means don't check for equality between the calculated and expected MD5s. This is useful if you are just writing a new test and don't know the true output.

Override parameterize() { return true; } if you want the system to just run your calculations across all tests and not throw an error if your MD5s don't match.

If your tests all of a sudden stop giving matching MD5s, you can just (1) look at the .tmp output files directly or (2) grab the printed GATK command-line options and explore what is happening. You can always run a GATK walker on the command line and then run md5sum on its output files to obtain, outside of the testing framework, the expected MD5 results. Don't worry about the duplication of lines in the output; it's just an annoyance of having two global loggers. Eventually we'll bug fix this away.

Hi, I was wondering if there is a nice way to apply multiple processing steps to each variant (or a group of variants) as they are read so that the variant file is not read again and again. My understanding is that even if I use Queue, each script would read the VCF again. Is that correct?

I was trying to use GATK ReadNameFilter as suggested on January 22nd in this thread: However, --help does not list a tool named ReadNameFilter and I get this error when I try to use it: Am I going mad and/or is there some simple explanation?

I have twice run UnifiedGenotyper and the resultant .vcf file contains only part of chromosome 20. I do not see what I am doing wrong. Neither do the other two people in the lab who have extensive experience with GATK.

-T CombineVariants -genotypeMergeOptions UNIQUIFY

Is UNIQUIFY the default? I want every variant in the output file with each individual in the same column.

I have never heard of an "r-readable table". I used Google to search for it and got nothing helpful. In fact the page on this site is the first hit. It appears on this page in the "DepthOfCoverage specific arguments" table, under "Summary" for "--outputFormat". Could you please tell me what an "r-readable table" is? Thanks.

Hi All, In my desperate attempts to learn more about how the GATK works I've hit a wall: it's written in Java, which I have almost no clue about (Python all the way), so to be honest I've not the faintest idea how to go about troubleshooting the error below. I found a blog post by the ever wonderful Pierre Lindenbaum here that went through compiling and running your first GATK walker. First, using Pierre's example (after some compiling issues, as there doesn't seem to be a ReadMetaDataTracker class anymore and I replaced it with RefMetaDataTracker), I get the following error:

java -cp /path/to/GenomeAnalysisTK.jar:/path/to/cofoja-1.0-r139.jar:/path/to/HelloRead.jar org.broadinstitute.sting.gatk.CommandLineGATK -T HelloRead -I path/to/test.bam -R /path/to/human_g1k_v37.fasta

##### ERROR ------------------------------------------------------------------------------------------
##### ERROR stack trace
java.lang.NullPointerException
at org.broadinstitute.sting.utils.classloader.JVMUtils.isAnonymous(JVMUtils.java:91)
at org.broadinstitute.sting.utils.classloader.PluginManager.<init>(PluginManager.java:155)
at org.broadinstitute.sting.utils.classloader.PluginManager.<init>(PluginManager.java:124)
at org.broadinstitute.sting.gatk.WalkerManager.<init>(WalkerManager.java:55):57)
at org.broadinstitute.sting.gatk.CommandLineGATK.main(CommandLineGATK.java:93)
##### ERROR ------------------------------------------------------------------------------------------
##### ERROR A GATK RUNTIME ERROR has occurred (version 2.2-16-g9f648cb):
##### ------------------------------------------------------------------------------------------

I thought that was a very strange error, and in Pierre's example he used GATK version 1.4 whereas I am on 2.2. I also tried the following:

java -cp /path/to/cofoja-1.0-r139.jar:/path/to/HelloRead.jar -jar /path/to/GenomeAnalysisTK.jar -T HelloRead -I path/to/test.bam -R /path/to/human_g1k_v37.fasta

##### ERROR ------------------------------------------------------------------------------------------
##### ERROR A USER ERROR has occurred (version 2.2-16-g9f648cb):
#####: Invalid command line: Malformed walker argument: Could not find walker with name: HelloRead
##### ERROR ------------------------------------------------------------------------------------------

I have not tried downloading an older version of the GATK and running it with that. I'm pretty much stuck here. Any help is greatly appreciated. Cheers, Davy
https://www.broadinstitute.org/gatk/guide/tagged?tag=walkers
CC-MAIN-2015-40
en
refinedweb
- Author: ageldama
- Posted: April 29, 2008
- Language: Python
- Version: .96
- Tags: middleware error exception custom handling
- Score: 1 (after 1 ratings)

<code>
is_loaded = False
if not is_loaded:
    is_loaded = True
    #... ExceptionHandlingMiddleware as EHM
    import views
    EHM.append(views.AuthFailError, views.auth_fail)
    # when AuthFailError is thrown it redirects to the auth_fail view function.
</code>

If settings.DEBUG is True, it's natural to show the Django native error page, I guess.

# This is not for an error page. This middleware is intended to be used when you want exception handling as request forwarding :-) (like AOP) #
https://djangosnippets.org/snippets/732/
CC-MAIN-2015-40
en
refinedweb
#include <db.h>

int lock_stat(DB_ENV *env, DB_LOCK_STAT **statp);

The lock region statistics are stored in a structure of type DB_LOCK_STAT. The following DB_LOCK_STAT fields will be filled in:

The lock_stat function returns a non-zero error value on failure and 0 on success.

The lock_stat function may fail and return a non-zero error for errors specified for other Berkeley DB and C library or system functions. If a catastrophic error has occurred, the lock_stat function may fail and return DB_RUNRECOVERY, in which case all subsequent Berkeley DB calls will fail in the same way.
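A short illustrative sketch of calling lock_stat (not part of the original page). The statistics field names (st_nlockers, st_ndeadlocks) and the assumption that the returned structure is allocated by the library and should be freed by the caller reflect typical Berkeley DB releases; verify against your db.h.

#include <stdio.h>
#include <stdlib.h>
#include <db.h>

/* env is assumed to be a DB_ENV * already opened with DB_INIT_LOCK. */
int
print_lock_stats(DB_ENV *env)
{
        DB_LOCK_STAT *sp;
        int ret;

        ret = lock_stat(env, &sp);
        if (ret != 0) {
                fprintf(stderr, "lock_stat: %s\n", db_strerror(ret));
                return ret;
        }

        /* Field names below are assumed; consult db.h for your release. */
        printf("current lockers:    %lu\n", (unsigned long)sp->st_nlockers);
        printf("deadlocks detected: %lu\n", (unsigned long)sp->st_ndeadlocks);

        free(sp);       /* the statistics structure is assumed to be malloc'd */
        return 0;
}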
http://pybsddb.sourceforge.net/api_c/lock_stat.html
crawl-001
en
refinedweb
In this tutorial I will attempt a rough explanation of the MouseListener interface. Interfaces are very hard to explain; they are like a plugin or an adapter that allows completely different objects to communicate. These are its methods:

public void mousePressed(MouseEvent e){}
public void mouseClicked(MouseEvent e){}
public void mouseEntered(MouseEvent e){}
public void mouseExited(MouseEvent e){}
public void mouseReleased(MouseEvent e){}

An interface is used with the keyword "implements" in the class declaration. You must implement all methods of an interface, but you can leave any unneeded methods empty. For example:

import java.awt.*;
import java.awt.event.*;
import java.applet.*;

public class MouseTest extends Applet implements MouseListener {

    int x, y;
    String tracer = "";  // initialized so paint() has something to draw before the first press

    public void init() {
        addMouseListener(this);
        setBackground(Color.red);
    }

    public void mousePressed(MouseEvent e) {
        x = e.getX() - 10;
        y = e.getY() - 10;
        tracer = " x = " + x + " y = " + y;
        // repaint the applet
        repaint();
    }

    // we don't use these, so leave them empty
    public void mouseClicked(MouseEvent e) {}
    public void mouseEntered(MouseEvent e) {}
    public void mouseExited(MouseEvent e) {}
    public void mouseReleased(MouseEvent e) {}

    // paint method
    public void paint(Graphics g) {
        g.drawString(tracer, 100, 100);
        g.drawOval(x, y, 20, 20);
    }
}
http://www.video-animation.com/java_015.shtml
crawl-001
en
refinedweb
#include <db.h>

int DB->set_h_ffactor(DB *db, u_int32_t h_ffactor);

Set the desired density within the hash table. The density is an approximation of the number of keys allowed to accumulate in any one bucket, determining when the hash table grows or shrinks. If you know the average sizes of the keys and data in your data set, setting the fill factor can enhance performance. A reasonable rule for computing the fill factor is to set it to the following:

(pagesize - 32) / (average_key_size + average_data_size + 8)

If no value is specified, the fill factor will be selected dynamically as pages are filled.

The DB->set_h_ffactor interface may be used only to configure Berkeley DB before the DB->open interface is called.

The DB->set_h_ffactor function returns a non-zero error value on failure and 0 on success.

The DB->set_h_ffactor function may fail and return a non-zero error for the following conditions: Called after DB->open was called.

The DB->set_h_ffactor function may fail and return a non-zero error for errors specified for other Berkeley DB and C library or system functions. If a catastrophic error has occurred, the DB->set_h_ffactor function may fail and return DB_RUNRECOVERY, in which case all subsequent Berkeley DB calls will fail in the same way.
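The following sketch, which is not from the original page, applies the fill-factor formula above before DB->open. The page size, average key/data sizes, file name, and the pre-4.1-style DB->open signature are illustrative assumptions; adjust them for your data set and Berkeley DB release.

#include <db.h>

int
create_hash_db(DB **dbpp)
{
        DB *dbp;
        int ret;
        u_int32_t pagesize = 4096;              /* example page size */
        u_int32_t avg_key = 20, avg_data = 100; /* example average sizes */
        u_int32_t ffactor = (pagesize - 32) / (avg_key + avg_data + 8);

        if ((ret = db_create(&dbp, NULL, 0)) != 0)
                return ret;

        /* set_h_ffactor must be called before DB->open */
        if ((ret = dbp->set_h_ffactor(dbp, ffactor)) != 0 ||
            /* pre-4.1 style open, no transaction argument; adjust for your release */
            (ret = dbp->open(dbp, "hash.db", NULL, DB_HASH, DB_CREATE, 0664)) != 0) {
                (void)dbp->close(dbp, 0);
                return ret;
        }
        *dbpp = dbp;
        return 0;
}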
http://pybsddb.sourceforge.net/api_c/db_set_h_ffactor.html
crawl-001
en
refinedweb
#include <db.h>

int DB_ENV->set_flags(DB_ENV *dbenv, u_int32_t flags, int onoff);

The flags value must be set to 0 or by bitwise inclusively OR'ing together one or more of the following values: If onoff is set to zero, the specified flags are cleared; otherwise they are set.

The number of transactions potentially at risk is governed by how often the log is checkpointed (see db_checkpoint for more information) and how many log updates can fit into the log buffer.

The DB_ENV->set_flags function returns a non-zero error value on failure and 0 on success.

The database environment's flag values may also be set using the environment's DB_CONFIG file. The syntax of the entry in that file is a single line with the string "set_flags", one or more whitespace characters, and the interface flag argument as a string; for example, "set_flags DB_TXN_NOSYNC". Because the DB_CONFIG file is read when the database environment is opened, it will silently overrule configuration done before that time.

The DB_ENV->set_flags function may fail and return a non-zero error for the following conditions:

The DB_ENV->set_flags function may fail and return a non-zero error for errors specified for other Berkeley DB and C library or system functions. If a catastrophic error has occurred, the DB_ENV->set_flags function may fail and return DB_RUNRECOVERY, in which case all subsequent Berkeley DB calls will fail in the same way.
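As an illustration (not from the original page), the call below sets or clears DB_TXN_NOSYNC, the flag used as the example in the DB_CONFIG discussion above; any other documented flag value could be passed the same way.

#include <db.h>

/* dbenv is assumed to be a DB_ENV * created with db_env_create(). */
int
toggle_txn_nosync(DB_ENV *dbenv, int enable)
{
        /*
         * A non-zero third argument sets the flag, zero clears it, as
         * described above.  DB_TXN_NOSYNC trades durability of the most
         * recently committed transactions for speed.
         */
        return dbenv->set_flags(dbenv, DB_TXN_NOSYNC, enable ? 1 : 0);
}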
http://pybsddb.sourceforge.net/api_c/env_set_flags.html
crawl-001
en
refinedweb
#include <db.h>

int DB->set_lorder(DB *db, int lorder);

Set the byte order for integers in the stored database metadata. The number should represent the order as an integer; for example, big endian order is the value 4,321, and little endian order is the value 1,234. If lorder is not explicitly set, the host order of the machine where the Berkeley DB library was compiled is used.

The value of lorder is ignored except when databases are being created. If a database already exists, the byte order it uses is determined when the database is opened.

The access methods provide no guarantees about the byte ordering of the application data stored in the database, and applications are responsible for maintaining any necessary ordering.

The DB->set_lorder interface may be used only to configure Berkeley DB before the DB->open interface is called.

The DB->set_lorder function returns a non-zero error value on failure and 0 on success.

The DB->set_lorder function may fail and return a non-zero error for the following conditions: Called after DB->open was called.

The DB->set_lorder function may fail and return a non-zero error for errors specified for other Berkeley DB and C library or system functions. If a catastrophic error has occurred, the DB->set_lorder function may fail and return DB_RUNRECOVERY, in which case all subsequent Berkeley DB calls will fail in the same way.
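A brief sketch, not from the original page, of requesting big-endian (4,321) metadata byte order before the database is created; the file name, access method, and the pre-4.1-style DB->open signature are assumptions for illustration.

#include <db.h>

/* dbp is assumed to be a DB handle created with db_create() and not yet opened. */
int
create_big_endian_db(DB *dbp)
{
        int ret;

        /* Must be called before DB->open; only takes effect at creation time. */
        if ((ret = dbp->set_lorder(dbp, 4321)) != 0)
                return ret;

        /* pre-4.1 style open; adjust the signature for your release */
        return dbp->open(dbp, "example.db", NULL, DB_BTREE, DB_CREATE, 0664);
}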
http://pybsddb.sourceforge.net/api_c/db_set_lorder.html
crawl-001
en
refinedweb
#include <db.h>

int DB_ENV->set_lg_dir(DB_ENV *dbenv, const char *dir);

The path of a directory to be used as the location of logging files. Log files created by the Log Manager subsystem will be created in this directory. If no logging directory is specified, log files are created in the environment home directory. See Berkeley DB File Naming for more information.

For the greatest degree of recoverability from system or application failure, database files and log files should be located on separate physical devices.

The DB_ENV->set_lg_dir interface may be used only to configure Berkeley DB before the DB_ENV->open interface is called.

The DB_ENV->set_lg_dir function returns a non-zero error value on failure and 0 on success.

The database environment's logging directory may also be set using the environment's DB_CONFIG file. The syntax of the entry in that file is a single line with the string "set_lg_dir", one or more whitespace characters, and the directory name. Because the DB_CONFIG file is read when the database environment is opened, it will silently overrule configuration done before that time.

The DB_ENV->set_lg_dir function may fail and return a non-zero error for the following conditions: Called after DB_ENV->open was called.

The DB_ENV->set_lg_dir function may fail and return a non-zero error for errors specified for other Berkeley DB and C library or system functions. If a catastrophic error has occurred, the DB_ENV->set_lg_dir function may fail and return DB_RUNRECOVERY, in which case all subsequent Berkeley DB calls will fail in the same way.
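A small sketch (not from the original page) of configuring a log directory before DB_ENV->open, keeping log files on a device separate from the data files as recommended above; the directory path and the open flags are illustrative assumptions. The equivalent DB_CONFIG entry would be a line such as "set_lg_dir /logs/myapp".

#include <db.h>

/* dbenv is assumed to be a DB_ENV * created with db_env_create() and not yet opened. */
int
open_env_with_log_dir(DB_ENV *dbenv, const char *home)
{
        int ret;

        /* Must be called before DB_ENV->open; "/logs/myapp" is a made-up path. */
        if ((ret = dbenv->set_lg_dir(dbenv, "/logs/myapp")) != 0)
                return ret;

        return dbenv->open(dbenv, home,
            DB_CREATE | DB_INIT_LOG | DB_INIT_MPOOL | DB_INIT_TXN, 0);
}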
http://pybsddb.sourceforge.net/api_c/env_set_lg_dir.html
crawl-001
en
refinedweb
Agenda See also: IRC log <scribe> Scribe: Rhys <scribe> Scribenick: Rhys SW: Reviews the agenda and invites comments DO: Could add links to the updated versioning draft SW: I'll do that NM: Don has joined us as the AC rep from Web 3D. He's also been helping me with identification in virtual worlds ... There is no formal issue for this yet ... In IBM I look at new technologies appearing and point these out. There is a lot going on in the virtual worlds space, including starting businesses and signing up users. Some are talking about standards, but there is limited interoperabilty at the moment ... Any interop is in a very piecewise way ... Interop rather limited. Seems that starting with identification might be a good idea ... Suppose on a billboard, there is a URI, I could use the URI and access the information from the web ... In virtual worlds, could I click on a virtual URI on a virtual billboard and access a virtual view of the information TBL: So does the URI identify a point in 3 space NM: This is one of the questions ... I'd like to find out what the community is doing in this area, which is why Don is with us ... I've skimmed the note, but not gone through the detail ... The Web3d does seem to be doing something in this space, and I'd like to understand this ... W3C needs to have a relationship with the Web3D folks, and then we should discuss if we think that things are going in the right direction DO: A friend of mine does some second life development. He has a way of using Amazon gift tokens within second life. You can transfer these between people, and effectively move money. NM: Second life doesn't know its a URI though presumably? DO: Right NM: What do you want to identify within these virtual worlds? Is it just an item like a showroom, or is it a point in 3 space DB: There are a lot of issues in what has just been layed out. ... Backdrop is that Web3D is small but persistent. I've been coming to liason meetings since 2000. TBL: There was a 3D workshop at the first web conference DB: There are 40 company and 200 personal members. Been around for more than 10 years. Stable working group structure. Similar approach to W3C. Aligned with web architecture ... Industry players often think they can own 3D on the web. ... Familiar with second life. I need to be careful about IP. ... I am familiar with the approaches that such systems use ... Everything will eventually align with the Web, I believe TBL: There is some open source 3D stuff. How aligned is that with your standards? DB: There is lots of open source. We require two implementations, and one has to be open source. We have a total of 18 implementations NM: XJ3D is a set of code. What's its scope? DB: It implements the specs that we have for 3D on the web, geometry, spatial component, etc. ... We've just been working on globe-building code bases using the technology. ... We don't show a formal preference for open source, as there are company implementations too. NM: How extensive is the industry support DB: There is healthy churn in the industry. Efforts come and go. We could position ourselves as a virtual worlds technology. ... We have about 80% of the complete technology stack needed to deploy a complete stack, though that would be a mission change for us NM: One of the specs, is aimed at the virtual worlds or not? DB: Actually no, though it could work there ... The summary is that we've always had a URL field in objects that need network resources. We have an anchor, so that a URL can be activated ... 
It could be a bookmark, etc. We have a convention for these viewports. NM: Are these HTTP? DB: Yes. Actually, we had forgotten to update the specs to URI, but we have done that now. TBL: From this anchor, I follow a link to one of these, I get a view? DB: Typically we're just accessing some other 3D world view or a texture wrapped on something. Could be imported into the current world? TBL: Do you have some idea of level of detail as you zoom in? DB: Yes, based on distance of view from the object. The URIs could reference anything. TBL: You move to the new point, and then render whatever is there NM: I have a URI. I click on the object and activate the http URI and there could be a # sign and a viewport. ... What comes back has your media type, and that spec defines the rules for the fragid which you process and then render the result DB: We have a fixed coordinate system in the base, but others in extensions TBL: Suppose I have a 3d plugin in my browser, can I bookmark a coordinate? DB: Right now that would be a browser feature TBL: That would be a good point for interoperability DB: Its a good point. There is a potential tension between the various players. We sometimes get pushback if you save bookmarks and maybe e-mail them to someone else, the quesiton arises about what then you do with them TBL: Surely the place I am at is just a coordinate. NM: What fraction of the references tend to be symbolic, and what are 3 space coordinates? DB: More coordinates, but they are not in a common coordinate space ... There is no global reference other than the URI TBL: So if in my world im building a town, and I place your house at a particular place in my town. DB: And if you wanted to be in a particular lat/long location, you can do that. TBL: So if someone comes to my world and goes to your house, I should be able to bookmark your house. DB: Almost, because there is a limitation right now about moving from one coordinate system in an included item shared with the one in which it is embedded NM: I think there may be a slightly different use case here ... The web got the cosmic goal early. The Web3D approach seems to have been a collection of point solutions that interoperate TBL: Second life has a global 3D coordinate system. DB: There is actually a variety of SURLS. NM: A SURL is based on a region, and then has an x,y,z. So the island name is the base for the SURL SW: So the web3d approach is to use URIs to point out elements within the world. ... Anchors provide named view points. DB: And those anchors can include behaviour. NM: I think that the mechanisms are there in the Web. You have them too, but the statistics are different. On the Web, not being accessible is unusual. On Web3D, people have used the specs differently. ... The default is local serving rather than making them available. via the web s/avaialble/available/ DB: There has been an historic lack of consensus, that has meant that a common view of how to use URIs for this has no common definition yet. ... Our model is that anyone can have a 3D scene and point to parts of it TBL: Anyone with a 3D web browser could go directly from the current web straight into the 3D world. I think we are at the point where there could be value in enablihng that. ... The ideal could be that the virtual worlds links are used just like web links NM: We should identify a set of things that we should follow up. ... 
Tell us about the list of links DB: Our problem was that we often needed multiple alternate links for the same resource, for connectivity, performance, reliability reasons ... Typically you need a set of objects for a view. You'd normally use a relative link first, then other links ... They all point to the same thing, but ordered for performance. They are tried in order NM: Is there a URI that represents this list? DB: It's possible that people might do that, but its not inherent. The references are ordered lists of URIS SW: What do they look like? DB: They are a list of quoted strings NM: So in X3D you could have an attribute that includes a string as the list of URIs SW: So is the syntax attr='"uri1" "uri2" ...' DB: yes. SW: There are things in XML that accept lists of URIs NM: Yes, but in this case these are treated as what is effectively the same thing TBL: So you have to use this list each time you want to refer to a URI. Also the URI could refer to an entire view, so maybe you don't have to access these that often ... Typically on the web people would have a single URI that references the list. <Noah> Noodling on questions we might track: DB: So we also allow relative and absolute URIs and URNs. <Noah> * Are particular specs such as Web 3D specs using the Web and URIs in a good way? SW: Architecturally it would be good not to restrict the types of URI <Noah> * Is there a need for something resembling conneg, in which the same URI could sometimes resolve to a traditional Web resource (product demonstration on the traditional web) vs. 3D representation (product demonstration in virtual world) TBL: The classic approach would be to use this canonical URI approach. Actually, my laptop does this. It looks in the local filespace first, uses that if its there ... The other approach could be to use a catalog NM: I just typed into IRC a couple of questions. They are the things with the asterisks ... Are particular specs such as Web 3D specs using the Web and URIs in a good way? ... that's the kind of thing we could look at <timbl> A catalog is a chunk of metadata which gives for each canonical URI a set of equivalent URIs to try, or a set of rewriting rules for URI patterns. DB: Web 3d would like to engage in that discussion NM: Is there a need for something resembling conneg, in which the same URI could sometimes resolve to a traditional Web resource (product demonstration on the traditional web) vs. 3D representation (product demonstration in virtual world) ... If I have a product specification, I may send you a different representation if you are on a cell phone than if you are on a desktop ... Maybe I could give you a 3D representation if I knew you were in a virtual world ... Maybe it just needs a redirection. The question is whether anyone cares about the use case. I don't think anyone is doing this yet TBL: It should just work shouldn't it? NM: In Web3D this looks promising, but in other worlds, there doesn't seem to be this capability ... The issue is about virtual worlds in general. So part of this is that the Web3D approach looks like it could work, but for other players this might not be the case DB: Also, would the same facilities be available via other mechanisms too, like web services catalogues etc. SW: I'd like to understand things that need to be addressed by the TAG NM: I thought that TAG has a role to liase with communities beyond the web, I could be wrong ... Is it in scope? If so, what is its priority? <Noah> Mission statement . 
TBL: I'd like to encourage the Web3D community to develop a technology that could cross link via HTTP and in which there are bookmarkable links by coordinate ... to enable references that can be shared and can be used from anywhere in the Web ... I think it could be really valuable and that it would create interesting markets ... Decentralised worlds and decentralised development to encourage scalability <Noah> FWIW: I think the use case where the same http link sometimes gives a representation in the form of an HTML product description for desktop, sometimes resolves to one tailored to cell phone, and sometimes gives you a 3D representation (if you've got the necessary client), is very, very important. I think it's a good use of the TAG's time in principle to keep an eye on that, but I'm not strongly saying higher priority than other things we're trying to do. <dorchard> right, I wanted to ask about URIs for things. DB: I'd like to talk about this with our partners. DO: How about URIs for things rather than places. That's the httpRange-14 question in virtual worlds ... Could have a URI for a real thing that could be referenced in a virtual world. TBL: For lots of things they have this. For example, if the representation is a bunch of polygons <Noah> I'm not sure I'm looking forward to discussing the question: "Is there anything about the essence of an avatar that cannot be conveyed in a computer message?" TBL: Sometimes the refrence will be to an instance, sometimes to a 'class'. By class I mean a design for an object, from which instances can be created at specific 3 space coordinate Discussion ensues about the relationship between instances and models (classes), where they are, how they might be transferred and how ownership might be transferred... <timbl> The instance of an object needs an IS so we can say things about it like who owns it, whether it s for sale, and so on <dorchard> I mean can we have a URI for a thing in the real world <dorchard> then we transfer the "thing" from place to place NM: Resources have URIs, but representations tend not to. <dorchard> Or even link to thing in the real world, for transfer, etc. TBL: I don't think there is a philosophical issue here. It's just like images. It's just that in 3D its a set of polygons DB: We do worry about the ownership of the model when it is being used in multiple scenes ... We support specific protocols. One problem has been a lack of network capability in ECMAScript. There are other approaches including Ajax for 3D worlds. There are other approaches, but networking is tricky TBL: What is the relationship with the distributed 3D communities? DB: complete mixture of overlaps and alternative architectures ... We continue to try and apply the web approach to distributed 3D simulation TBL: One of the ways this might go is that there are large databases of interesting information that is geographically related. ... There is increasing interest in streaming this kind of data across networks so that multiple people can see views of changes immediately ... We might end up with streams of changes flowing over the Web. This is roughly the same challenge as might be needed to keep views consistent across 3D worlds <dorchard> I think I heard that the polygons of an object have URI(s) but not the thinng. <dorchard> Don: common thing is to worry about multi-user representation <dorchard> Don: not so much a URI for the thing. DB: We don't have an equivalent of DRM. We will use EXI when it becomes available. SW: What is the follow up? 
NM: I think that there is a lot of interest for me. I think this could be a big deal because its similar to the cell phone situation. The key is that you can call land lines from cell phones ... There is an analog here. I don't think it's urgent yet. I think it will be. There is fragmentation now, which it will be important to fix over time. It's equivalent to a situation where cell phones could not call land lines. ... Not sure we need to open an issue right now. SW: Reviews agenda progress TAG thanks Don for participating Don Brutzman leaves DO: changed the first good practice to 'server or resource' from server NM: Should include the notion of the representation <dorchard> Server, or more specifically the representations served such as forms, should not solicit ... <dorchard> Server, or more specifically the representations served such as forms, should not solicit ... <Noah> suggest s/served/it serves/ <dorchard> A server, or more specifically the representations it serves such as forms, should not solicit any passwords in clear text. <Stuart> <dorchard> fugged about it, back to A server should not solicit any passwords in clear text. <dorchard> A server should not solicit any passwords in clear text. <dorchard> A server should not solicit any passwords in clear text. DO: The paragraph about warning the end user has been removed. I've added text to describe why a good practice about this is not possible TBL: I think there is a case for having a 'mode' of javascript that is hampered to prevent unsafer operations <dorchard> Noah: don't do paragraph break <dorchard> Noah: don't say "can provide", say "provides" NM: Let me clarify this. If I don't have network access, for example, why would I ask for a password? TBL: If you don't have javascript that could make the request, you would have to use a form, for example when the padlock is on NM: This doesn't cover the situation for other sensitive data. Should this feature that Tim is advocating cover more than just passwords ... Lots of people won't understand the issue. People will simply assume that if the site they are using is reputable that they will be doing something reasonable with a password. ... You could imagine defining more field types in, say HTML, where these could reflect additional uses. This sounds like its beyond what we want to design in this finding DO: I removed the problematic good practice guide about warning users when sending a password in the clear. <dorchard> <dorchard> Timbl: note that it's dangerous to send sensitive information such as passwords in clear text, there's no obvious method by which a web browser can reliably know when the data entered is sensitive. <dorchard> furthermore, in browsers which enable scripting, it may be impossible to know whether the information is transmitted in clear text. <dorchard> (and the same for versioning...) NM: Begs the question about the password type in HTML. We need to explain that DO: Can we get some words? NM: Points out that the HTML spec says that type=password can be used for sensitive information such as passwords DO: I added text for digest authentication from Hal. NM: There was a part of the finding that encouraged the use of digest, but actually because of this issue of salted hashes, you actually can't do this. ... You can't get the secrets to the correct place to enable use of digests in some use cases. These were the cases that Hal thought were the majority, hence there was no value in using digest. ... 
I think there may be cases where it can be used, in some kinds of newly written web applications ... Suggest that we add something about issues with digest where passwords are already stored as salted hashes TBL: I thought that the problem was about accessing the secret key. ... suggest change to 'Because many systems store passwords as salted hashes....' <dorchard> NM: change "because most passwords are stored.." to "Because many systems store passwords" DO: Next, in section 3, the good practice changed to should from must. Added an explanation about circumstances under which masking is not required NM: Could we change the 'It is the TAG's opinion...' because its not our normal phrasing. <dorchard> NM: s/It is the TAG's opinion that if the form field is a password, password masking must take "/If the form field is a password, password masking should take <dorchard> nm: s/is displayed/be displayed <dorchard> nm: s/or the password/or that the password/ <dorchard> One example is that the uswer may request that the password be displayed in the clear in order to check the password as it is being entered. <dorchard> Another example is that the password is intended to prevent search engine access and so it is not particularly sensitive. <dorchard> Another example is a password that is intended to prevent search engine access and so it is not particularly sensitive. <dorchard> Another example is that the password is intended to prevent search engine access and so the password is not particularly sensitive. Another example is a password intended merely to prevent search engine access, and which consequently is not particularly sensitive TBL: Do we cover temporal versioning and exensibility, or just the temporal aspect? NM: I don't want to lose the sense of extensibility <dorchard> <dorchard> nm: change because evolution to "because support for evolution" DO: Any objections up to section 1.1? <Stuart> suggest for 3. s/...., then a given language version/....then a given language version specification/ Discussion ensues on the text of list item #3 in section 1, and whether it requires clarification. scribe: should define a set of future version identifiers that will be considered compatible. This set could, of course, be empty. RL notes that the previous line was his attempt to help a discussion, but was not discussed Discussing section 1.1 <dorchard> nm: section 1.1, bullet 2: common is unclear, name structure should be something like person name structure SW: Is there a section elsewhere where the name example is discussed? ... Could you link to it rather than explaining it here? DO: Yes <dorchard> NM: #3 change "schemas" to languages. NM: In point 3, could we use langauage instead of schema <dorchard> all three of thouse languages employ markup from the same namespace but they are different languages. <dorchard> tbl: sentences shouldn't start with "And.." <dorchard> tbl: change to separate bullet, #4. <dorchard> nm: kill last paragraph. <dorchard> nm: swap last 2 sentences. <dorchard> nm: change "those applications" to "the applications using it" DO: Any other other changes for 1.1? None raised Section 1.2 <dorchard> NM: in some languages, each instance contains just a name. DO: Maybe I need to remove the first bullet ("Just Names") SW: I might choose a URI as the example. DO: I did mean the abstract, when I first wrote it. I need to think whether I want to extend the notion of language to deal with this kind of abstraction NM: Please don't. 
We started with texts because these are things that can be exchanged across the Web <dorchard> I need to remove the abstract names from this.. NM: I thought you were building this up from the simplest case, where there is just a name DO: Ok, I'll craft something SW: Points out that we are runing out of time. <dorchard> nm: move non-markup text before markup NM: Any reason not to put text languages ahead of markup? If we did it would be in the sequence of increasing elaboration. <dorchard> put link to versioning xml DO: Have we got to the end of section 1.2? NM: Not sure about the bullet on binary. We said that we were talking about languages composed of texts <dorchard> I'd prefer to include gif, jpeg in our versioning strategy <dorchard> while our formal definition of language restricts itself to text based language, the advice herein may be useful for binary languages such as gif, jpeg. DO: We could move the GIF and JPEG to a section that mentions non-text languages to which many of the findings in the documnent apply. ... What about binary coded XML SW: There is some layering going on here. NM: Dave wants to be able to deal with abstract trees, for example. I think we chose not to do that in Edinburgh SW: I'm not sure we did say that. NM: We could change the definition of languages or we could restrict this to text-based languages <RhysL> TBL: We're not covering APIs <RhysL> TBL: I would be loath to talk about versioning and APIs <RhysL> DO: Serialisations are languages, but they have an affect on the API <dorchard> tbl: can relax definitions to be sequence of characters or bytes <dorchard> nm: need to be on bits <RhysL> DO: I could live with Tim's suggestion of defining a text as a sequence of characters or bits <RhysL> TBL: Let's just leave it for now and work on text for the definition of texts in languages <dorchard> How about changing text to Text is a sequence of characters or bits <RhysL> NM: I think we need to re-read the terminology section, to check that there are no additional issues that are caused <dorchard> tbl: happy down to 2 Versioning Strategies <Stuart> rrsagent pointer? <Stuart> rrsagent pointer <Stuart> logger, pointer <Stuart> logger, pointer?
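To make the good practice under discussion ("A server should not solicit any passwords in clear text") concrete, here is a small illustration that is not part of the minutes; the URL and field names are invented for the example. The password input type only masks what is displayed, it does not protect the value in transit, so the protection comes from submitting the form to an HTTPS endpoint:

<!-- Problematic: submitted over plain HTTP, so the password travels in clear text -->
<form action="http://example.com/login" method="post">
  <input type="text" name="user">
  <input type="password" name="pass"> <!-- masks the display only; no encryption -->
  <input type="submit" value="Log in">
</form>

<!-- Better: the same form posted to an HTTPS endpoint -->
<form action="https://example.com/login" method="post">
  <input type="text" name="user">
  <input type="password" name="pass">
  <input type="submit" value="Log in">
</form>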
http://www.w3.org/2001/tag/2007/11/09-minutes
crawl-001
en
refinedweb
Websphere 6.1.x is IBM's application server offering. The latest release is 6.1.0.13 which does not have EJB3 or JEE5 support. There is a recently released (Nov 07) EJB3 feature pack which provides some support for EJB3 and JPA. Currently there is no true JEE5 offering from IBM. This causes some issues with Seam integration with applications that use EJB3. First we will go over some basic information about the Websphere environment that we used for these examples. After a good deal of research and work we were able to get EJB3 applications to function correctly. We will go over the details of those steps with the jee5 example. We will also deploy the the JPA example application. Websphere is a commercial product and so we will not discuss the details of its installation other than to say follow the directions provided by your particular installation type and license. This section will detail the exact server versions used, installation tips, and some custom properties that are needed for all of the examples. All of the examples and information in this chapter are based on the the latest version of Websphere at the time of this writing. Websphere Application Server 6.1.0.13 Feature Pack for EJB 3.0 for Websphere Application Server V6.1 (3.0.6.1.0.13) The EJB3 feature pack that we installed came with the 6.1.0.13 patch version of Websphere. Installing the feature pack does not ensure that your server will have the proper environment for EJB3 applications. Be sure that as part of the installation of the feature pack you follow the instructions to create a new server profile with the EJB3 feature pack enabled, or augment one of your existing ones. This can also be done after the installation by running the profile managment tool. There are times that restarting the server will be required after deploying or changes the examples in this chapter. Its does not seem like every change requires a restart. If you get errors or exceptions after modifing a property or deploying an application try to restart the server. There are a couple of Websphere custom properties that are required for Seam integration. These properties are not needed specifically for Seam, but work around some issues with Websphere. These are set following the instructions here : Setting web container custom properties prependSlashToResource = "true" — This solves a fairly common issue with Websphere where applications are not using a leading "/" when attempting to access resources. If this is not set then a java.net.MalformedURLException will be thrown. With this property set you will still see warnings, but the resources will be retrieved as expected. SRVE0238E: Resource paths must have a leading slash com.ibm.ws.webcontainer.invokefilterscompatibility = "true" — This solves an issue with Websphere where it throws a FileNotFoundException when a web application attempts to access a file resource that does not actually exist on disk. This is a common practice in modern web applications where filters or servlets are used to process resource requests like these. This issue manifests itself as failures to retrieve JavaScript, CSS, images, etc... when requesting a web page. PK33090; 6.1: A filter that serves a file does not pop-up an alert message The jee5/booking example is based on the Hotel Booking example (which runs on JBoss AS). Out of the box it is designed to run on Glassfish, but with the steps below it can be deployed to Websphere. It is located in the $SEAM_DIST/examples/jee5/booking directory. 
As stated before the EJB3 feature pack does not provide a full jee5 implementation. This means that there are some tricks to getting an application deployed and functioning. Below are the configuration file changes that are needed for the base example.

We need to change the way that we look up EJBs for Websphere. We need to remove the /local from the end of the jndi-pattern attribute. It should look like this:

<core:init

This is the first place that we notice an unexpected change because this is not a full jee5 implementation. Websphere does not support Servlet 2.5, it requires Servlet 2.4. For this change we need to adjust the top of the web.xml file to look like the following:

<?xml version="1.0" encoding="UTF-8"?>
<web-app

Next we have to make some changes to the EJB references in the web.xml. These changes are what will allow Websphere to bind the EJB2 references in the web module to the actual EJB3 beans in the EAR module. Replace all of the ejb-local-refs with the values below.

<!-- JEE5 EJB3 names -->
<ejb-local-ref>
  <ejb-ref-name>jboss-seam-jee5/BookingListAction</ejb-ref-name>
  <ejb-ref-type>Session</ejb-ref-type>
  <local-home></local-home>
  <local>org.jboss.seam.example.booking.BookingList</local>
</ejb-local-ref>
<ejb-local-ref>
  <ejb-ref-name>jboss-seam-jee5/RegisterAction</ejb-ref-name>
  <ejb-ref-type>Session</ejb-ref-type>
  <local-home></local-home>
  <local>org.jboss.seam.example.booking.Register</local>
</ejb-local-ref>
<ejb-local-ref>
  <ejb-ref-name>jboss-seam-jee5/ChangePasswordAction</ejb-ref-name>
  <ejb-ref-type>Session</ejb-ref-type>
  <local-home></local-home>
  <local>org.jboss.seam.example.booking.ChangePassword</local>
</ejb-local-ref>
<ejb-local-ref>
  <ejb-ref-name>jboss-seam-jee5/HotelBookingAction</ejb-ref-name>
  <ejb-ref-type>Session</ejb-ref-type>
  <local-home></local-home>
  <local>org.jboss.seam.example.booking.HotelBooking</local>
</ejb-local-ref>
<ejb-local-ref>
  <ejb-ref-name>jboss-seam-jee5/HotelSearchingAction</ejb-ref-name>
  <ejb-ref-type>Session</ejb-ref-type>
  <local-home></local-home>
  <local>org.jboss.seam.example.booking.HotelSearching</local>
</ejb-local-ref>
<ejb-local-ref>
  <ejb-ref-name>jboss-seam-jee5/EjbSynchronizations</ejb-ref-name>
  <ejb-ref-type>Session</ejb-ref-type>
  <local-home></local-home>
  <local>org.jboss.seam.transaction.LocalEjbSynchronizations</local>
</ejb-local-ref>

The important change is that there is an empty local-home element for each EJB. This tells Websphere to make the correct bindings between the web module and the EJB3 beans. The ejb-link element is simply not used.

For this example we will be using the default datasource that comes with Websphere. To do this change the jta-data-source element:

<jta-data-source>DefaultDatasource</jta-data-source>

Then we need to adjust some of the hibernate properties. First comment out the Glassfish properties. Next you need to add/change the properties:

<!--<property name="hibernate.transaction.flush_before_completion" value="true"/>-->
<property name="hibernate.cache.provider_class" value="org.hibernate.cache.HashtableCacheProvider"/>
<property name="hibernate.dialect" value="GlassfishDerbyDialect"/>
<property name="hibernate.transaction.manager_lookup_class" value="org.hibernate.transaction.WebSphereExtendedJTATransactionLookup"/>

hibernate.transaction.manager_lookup_class — Standard Hibernate transaction manager property for Websphere 6.X

hibernate.transaction.flush_before_completion — This is commented out because we want the container to manage the transactions.
Also if this is set to true an exception will be thrown by Websphere when the EJBContext is looked up. com.ibm.wsspi.injectionengine.InjectionException: EJBContext may only be looked up by or injected into an EJB hibernate.dialect — From WAS 6.1.0.9 on the embedded DB was switched to the same Derby DB in Glassfish. You will need to get the GlassfishDerbyDialect.class and copy it into the /resources directory. The class exists in the JPA example and can be copied using the command below assuming you are in jee5/booking directory: cp ../../jpa/resources-websphere61/WEB-INF/classes/GlassfishDerbyDialect.class ./resources This class will be put into the jboss-seam-jee5.jar file using changes to the build.xml discussed later. This file must also be copied from the JPA example because either the Derby DB or the dialect does not support changes to the ID column. The files are identical except for the column difference. Use the following command to make the copy cp ../../jpa/resources-websphere61/import.sql ./resources In order to get the changes we have made into our application we need to make some changes to the build.xml. There are also some additional jars that are required by our application in order to work with Websphere. This section will cover what changes are needed to the build.xml. JSF libraries — Websphere 6.1 comes with their own version of JSF 1.1 (Seam requires JSF 1.2). So we must add these jars to our application. jsf-api.jar jsf-impl.jar Since Websphere is not a fully compliant JEE5 implementation we need to add these EL libraries: el-api.jar el-ri.jar jboss-seam.jar — for some reason when deploying the application through the Websphere administration console it can not find the jboss-seam.jar at the base of the EAR archive. This means that we need to add it to the /lib of the EAR. Finally we remove the log4j.jar so that all of the log output from our application will be added to the Websphere log. Additional steps are required to fully configure log4j and those are outside of the scope of this document. Add the following entry to the bottom of the build.xml file. This overrides the default fileset that is used to populate the jboss-seam-jee5.jar. The primary change is the addition of the GlassfishDerbyDialect.class: <fileset id="jar.resources" dir="${resources.dir}"> <include name="import.sql" /> <include name="seam.properties" /> <include name="GlassfishDerbyDialect.class" /> <include name="META-INF/persistence.xml" /> <include name="META-INF/ejb-jar.xml" /> </fileset> Next we need to add the library dependencies discussed above. For this add the following to bottom of the ear.lib.extras fileset entry: <!--<include name="lib/log4j.jar" />--> <include name="lib/el-api.jar" /> <include name="examples/jpa/lib/el-ri.jar" /> <include name="lib/jsf-api.jar" /> <include name="lib/jsf-impl.jar" /> <include name="lib/jboss-seam.jar" /> </fileset> Now all that is left is to execute the ant archive task and the built application will be in the jee5/booking/dist directory. So now we have everything we need in place. All that is left is to deploy it - just a few steps more. For this we will use Websphere's administration console. As before there are some tricks and tips that must be followed. The steps below are for the Websphere version stated above, yours may be slightly different. Log in to the administration console Access the Enterprise Application menu option under the Applications top menu. At the top of the Enterprise Application table select Install. 
Below are the installation wizard pages and what needs to be done on each:

Preparing for the application installation: Browse to the examples/jee5/booking/dist/jboss-seam-jee5.ear file using the file upload widget. Select the Next button.

Select installation options: Select the Deploy enterprise beans check box. This is needed unless you used a Websphere tool to package the application. Select the Next button.

Map modules to servers: No changes needed here as we only have one server. Select the Next button.

Map EJB references to beans: This page will list all of the beans that we entered in the web.xml. Make sure that the Allow EJB reference targets to resolve automatically check box is selected. This will tell Websphere to bind our EJB3 beans to the EJB references in the web module. Select the Next button.

Map virtual hosts for Web modules: No changes needed here. Select the Next button.

Summary: No changes needed here. Select the Finish button.

Installation: Now you will see it installing and deploying your application. When it finishes select the Save link and you will be returned to the Enterprise Applications table.

Now that we have our application installed we need to make some adjustments to it before we can start it: Starting from the Enterprise Applications table select the Seam Booking link. Select the Manage Modules link. Select the jboss-seam-jee5.war link. Change the Class loader order combo box to Classes loaded with application class loader first. Select Apply and then Save options. Return to the Seam Booking page. On this page select the Class loading and update detection link. Select the radio button for Classes loaded with application class loader first. Even though we are not enabling class reload you must also enter a valid number in the Polling interval for updated files text area (zero works fine). Select Apply and then Save options.

You should verify that the change you just made has been remembered. We have had problems with the last class loader change not taking effect - even after a restart. If the change did not take you will need to do it manually, following these directions:

Open the following file in a text editor of your choice: $WebSphereInstall/$yourServerName/profiles/$yourProfileName/config/cells/$yourCellName/applications/Seam Booking.ear/deployments/Seam Booking/deployment.xml

Modify the following line so that PARENT_FIRST is now PARENT_LAST:

<classloader xmi:

Save the file and now when you go to the Class loading and update detection page you should see Classes loaded with application class loader first selected.

To start the application return to the Enterprise Applications table and select our application in the list. Then choose the Start button at the top of the table. You can now access the application in your browser.

The default timeout period for a Websphere 6.1 Stateful EJB is 10 minutes. This means that you may see some EJB timeout exceptions after some idle time. It is possible to adjust the timeout of the Stateful EJBs on an individual basis, but that is beyond the scope of this document. See the Websphere documentation for details.

Thankfully getting the jpa example to work is much easier than the jee5 example. This is the Hotel Booking example implemented in Seam POJOs and using Hibernate JPA with JPA transactions. It does not require EJB3 support to run. Build it by running the websphere61 ant target (ant websphere61) from the examples/jpa directory. This will create container specific distribution and exploded archive directories with the websphere61 label.
This is similar to the jee5 example at Section 28.2.3, “Deploying the application to Websphere”, but without so many steps. From the Enterprise Applications table select the Install button. Preparing for the application installation Browse to the examples/jpa/dist-websphere61/jboss-seam-jpa.war file using the file upload widget. In the Context root text box enter jboss-seam-jpa. Select the Next button. Select the Next button for the next three pages, no changes are needed. Summary page Review the settings if you wish and select the Finish button to install the application. When installation finished select the Save link and you will be returned to the Enterprise Applications table. As with the jee5 example there are some class loader changes needed before we start the application. Follow the instructions at installation adjustments for jee5 example but exchange jboss-seam-jpa for Seam Booking. Finally start the application by selecting it in the Enterprise Applications table and clicking the Start button. You can now access the application at the. The differences between the JPA examples that deploys to JBoss 4.2 and Websphere 6.1 are mostly expected; library and configuration file changes. Configuration file changes WEB-INF/web.xml — the only significant change is that Websphere 6.1 only support Servlet 2.4 so the top of this file was changed. META-INF/persistence.xml — the main changes here are for the datasource JNDI path, switching to the Websphere 6.1 transaction manager look up. Changes for dependent libraries WEB-INF/lib — The Websphere version requires several library packages because they are not included as they are with JBoss AS. These are primarily for hibernate, JSF-RI support and their dependencies. Below are listed only the additional jars needed above and beyond the JBoss JPA example. To use Hibernate as your JPA provider you need the following jars: Seam requires JSF 1.2 and these are the jars needed for that. Websphere 6.1 ships with its own implementation of JSF 1.1. Various third party jars that Websphere Websphere. As stated above in Section 28.2, “ The jee5/booking example ” there are some tricky changes needed to get an EJB3 application running. This section will take you through the exact steps.] websphere_example [echo] Accepted project name as: websphere_example [input] Do you want to use ICEFaces instead of RichFaces [n] (y, [n], ) [input] skipping input as property icefaces.home.new has already been set. [input] Select a RichFaces skin .websphere_example] [com.mydomain.websphere_example] org.jboss.seam.tutorial.websphere.action [input] Enter the Java package name for your entity beans [org.jboss.seam.tutorial.websphere.action] [org.jboss.seam.tutorial.websphere.action] org.jboss.seam.tutorial.websphere.model [input] Enter the Java package name for your test cases [org.jboss.seam.tutorial.websphere.action.test] [org.jboss.seam.tutorial.websphere.action.test] org.jboss.seam.tutorial.websphere: /rhdev/projects/jboss-seam/svn-seam_2_0/jboss-seam-2_0/seam-gen/build.properties [echo] Installing JDBC driver jar to JBoss server [copy] Copying 1 file to /home/jbalunas/jboss/jboss-4.2.2.GA/server/default/lib [echo] Type 'seam create-project' to create the new project BUILD SUCCESSFUL Total time: 3 minutes 5 seconds Type ./seam new-project to create your project and cd /home/jbalunas/workspace/websphere_example to the newly created structure. We now need to make some changes to the generated project. Alter the jta-data-source to be DefaultDatasource. 
We are going to be using the integrated Websphere DB. Add or change the properties below. These are described in detail in the jee5 example section above:

<property name="hibernate.dialect" value="GlassfishDerbyDialect"/>
<property name="hibernate.transaction.manager_lookup_class" value="org.hibernate.transaction.WebSphereExtendedJTATransactionLookup"/>

Remove the JBoss AS specific method of exposing the EntityManagerFactory:

<property name="jboss.entity.manager.factory.jndi.name" value="java:/websphere_exampleEntityManagerFactory">

You'll need to alter persistence-prod.xml as well if you want to deploy to Websphere using the prod profile.

As with other examples we need to include this class for DB support. It can be copied from the jpa example into the websphere_example/resources directory.

cp $SEAM/examples/jpa/resources-websphere61/WEB-INF/classes/GlassfishDerbyDialect.class ./resources

Next we need to update the resources/WEB-INF/components.xml file (remember, we are using Websphere's default datasource):

Enable container managed transaction integration: add the <transaction:ejb-transaction /> component, and its namespace declaration xmlns:transaction=""

Alter the jndi-pattern to java:comp/env/websphere_example/#{ejbName}

We do not need managed-persistence-context for this example and so can delete its entry.

<persistence:managed-persistence-context

Websphere does not support Servlet 2.5, it requires Servlet 2.4. For this change we need to adjust the top of the web.xml file to look like the following:

<?xml version="1.0" encoding="UTF-8"?>
<web-app

As with the jee5/booking example we need to add EJB references to the web.xml. These references require the empty local-home to flag them for Websphere to perform the proper binding.

<ejb-local-ref>
  <ejb-ref-name>websphere_example/AuthenticatorAction</ejb-ref-name>
  <ejb-ref-type>Session</ejb-ref-type>
  <local-home></local-home>
  <local>org.jboss.seam.tutorial.websphere.action.Authenticator</local>
</ejb-local-ref>
<ejb-local-ref>
  <ejb-ref-name>websphere_example/EjbSynchronizations</ejb-ref-name>
  <ejb-ref-type>Session</ejb-ref-type>
  <local-home></local-home>
  <local>org.jboss.seam.transaction.LocalEjbSynchronizations</local>
</ejb-local-ref>

With these references in place we are good to go.

This application has similar requirements to the jee5/booking example. Change the default target to archive (we aren't going to cover automatic deployment to Websphere).

<project name="websphere_example" default="archive" basedir=".">

Websphere looks for the drools /security.drl file in the root of the war file instead of the root of the websphere_example jar.

Websphere dependencies: You will need to copy the el-ri.jar from the $SEAM/examples/jpa/lib directory.

<!-- jsf libs -->
<include name="lib/jsf-api.jar" />
<include name="lib/jsf-impl.jar" />
<include name="lib/el-api.jar" />
<include name="lib/el-ri.jar"/>

Third party dependencies: You will need to copy the jboss-archive-browsing.jar from the $SEAM/examples/jpa/lib directory into the project's /lib directory. You will also need to acquire the concurrent.jar and place it in the same directory. You can get this from any jboss distribution or just search for it.

<!-- third party jars -->
<include name="lib/jboss-archive-browsing.jar" />
<include name="lib/concurrent.jar" />

jboss-seam.jar: this is needed in both the ear base and /lib directory.

<!-- seam jar -->
<include name="lib/jboss-seam.jar" />
<!-- seam jar -->
<include name="lib/jboss-seam.jar" />
</fileset>

Build your application by calling ant in the base directory of your project (ex. /home/jbalunas/workspace/websphere_example). The target of the build will be dist/websphere_example.ear. To deploy the application follow the instructions at Section 28.2.3, "Deploying the application to Websphere", but use references to this project (websphere_example) instead of jboss-seam-jee5.
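Pulling together the persistence changes described above, the resources/META-INF/persistence.xml for the generated project ends up along the following lines. This is a sketch rather than the literal file from the Seam distribution; the persistence-unit name is illustrative and other seam-gen defaults are omitted:

<?xml version="1.0" encoding="UTF-8"?>
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="1.0">
  <persistence-unit name="websphere_example">
    <provider>org.hibernate.ejb.HibernatePersistence</provider>
    <!-- use the datasource that ships with Websphere 6.1 -->
    <jta-data-source>DefaultDatasource</jta-data-source>
    <properties>
      <!-- dialect class copied from the jpa example into /resources -->
      <property name="hibernate.dialect" value="GlassfishDerbyDialect"/>
      <!-- Websphere 6.x transaction manager lookup -->
      <property name="hibernate.transaction.manager_lookup_class"
                value="org.hibernate.transaction.WebSphereExtendedJTATransactionLookup"/>
    </properties>
  </persistence-unit>
</persistence>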
http://docs.jboss.com/seam/2.0.1.GA/reference/en/html/websphere.html
crawl-001
en
refinedweb
Interaction plays a key role in shaping the experience a user has on an application. Animations help define these interactions since the user's eyes tend to pay attention to moving objects. These catchy and moving elements tell a story that helps the application differentiate from competitors and bring a better user experience. Creating animations can be daunting, especially programming and handling orchestrations (how these coordinate between each other). Thankfully, amazing people have created abstractions in libraries that allow the developer to create seamless, hardware-accelerated animations efficiently. In this post, I will give an introduction to Framer Motion and create simple animations with it. We'll be learning about motion components, orchestration, dragging, and automatic animations. React Animation Libraries In React, we have two main animation libraries: React Spring and Framer motion. I like both of them, but I believe each one has a use case. React Spring is a spring-physics based animation library. These animations emulate real spring physics for smooth animations. It is really powerful and flexible. Almost all properties of HTML tags can be fully animated with React Spring. This is especially important for complex and SVG animations, however, Its main disadvantage is its high learning curve. Framer Motion is a motion library. It is easy to learn and powerful with orchestrations. Contrary to React Spring, it has more types of animations: spring, tween, and inertia. Tween represent duration based animations like CSS, and inertia decelerates a value based on its initial velocity, usually used to implement inertial scrolling. Framer Motion is perfect for handling animations on 99% of sites. Its main disadvantage is its lack of documentation and some properties won't work for SVG animations. Choosing between these libraries depends greatly on what you are building and how much you are willing to devote to learning animations. React Spring can do all that Framer Motion does with more flexibility, but it is harder to read and understand. I recommend it for custom, complex animations especially for SVG and 3D (Three.js). For most websites, Framer Motion is better since it can handle most common cases and its learning curve is really low compared to React Spring. Also, its way of handling animations is more intuitive and declarative. This is why we'll focus on this library and learn about animations with it. The fundamentals of Framer Motion will be transferable to React Spring, however its syntax will be more abstract. How It Works: Motion components Framer motion core API is the motion component. There's a motion component for every HTML and SVG element. They work exactly the same as their HTML counterparts but have extra props that declaratively allow adding animations and gestures. Think of motion component as a big JavaScript object that can be used to access all HTML elements. Here are some ways one would call a motion component: <motion.div /> <motion.span /> <motion.h1 /> <motion.svg /> ... As said before, they allow for extra props. Some of the most used are: initialdefines the initial state of an element. styledefines style properties just like normal React elements, but any change in the values through motion values (values that track the state and the velocity of the component) will be animated. animatedefines the animation on the component mount. If its values are different from styleor initial, it will automatically animate these values. 
To disable mount animations initialhas to be set to false. exitdefines the animation when the component unmounts. This works only when the component is a child of the <AnimatePresence />component. transitionallows us to change animation properties. Here one can modify the duration, easing, type of animation (spring, tween, and inertia), duration, and many other properties. variantsallows orchestrating animations between components. Now that we know the basic props that motion can contain and how to declare them, we can proceed to create a simple animation. Mount Animations Let's say we want to create an element that on mount will fade in down. We would use the initial and animate prop. Inside the initial property, we'll be declaring where the component should be located before it mounts. We'll be adding an opacity: 0 and y: -50. This means the component initially will be hidden and will be 50 pixels up from its location. In the animate prop, we have to declare how the component should look when it is mounted or shown to the user. We want it to be visible and located on its initial position, so we'll add an opacity: 1 and y: 0. Framer Motion will automatically detect that the initial prop has a different value from the animate, and animate any difference in properties. Our snippet will look like this: import { motion } from "framer-motion" <motion.div initial={{ opacity: 0, y: -50 }} animate={{ opacity: 1, y: 0 }} > Hello World! </motion.div> This will create the following animation: Congratulations on creating your first animation with Framer Motion! 💡 You may have noticed that styles are missing from the snippets above. Some of the snippets will lack the styles in order to focus on Framer Motion's functionality. 📕 All animations in this post will have their own Storybook. Here is the link of the latter. Unmount Animations Unmounting or exit animations are crucial when creating dynamic UIs, especially when deleting an item or handling page transitions. To handle exit animations in Framer Motions, the first step is to wrap the element or elements in an <AnimatePresence/>. This has to be done because: - There is no lifecycle method that communicates when a component is going to be unmounted - There is no way to defer unmounting until an animation is complete. Animate presence handles all this automatically for us. Once the elements are wrapped, they must be given an exit prop specifying their new state. Just like animate detects a difference in values in initial, exit will detect the changes in animate and animate them accordingly. Let's put this into practice! If we take the previous component and were to add an exit animation. We want it to exit with the same properties as it did in initial import { motion } from "framer-motion" <motion.div exit={{ opacity: 0, y: -50 }} initial={{ opacity: 0, y: -50 }} animate={{ opacity: 1, y: 0 }} > Hello World! </motion.div> Now, let's add a <AnimatePresence/> so it can detect when our component unmounts: import { motion } from "framer-motion" <AnimatePresence> <motion.div exit={{ opacity: 0, y: -50 }} initial={{ opacity: 0, y: -50 }} animate={{ opacity: 1, y: 0 }} > Hello World! </motion.div> </AnimatePresence> Let's see what happens when the component unmounts: Orchestration One of Framer Motion's strong suits is its ability to orchestrate different elements through variants. Variants are target objects for simple, single-component animations. These can propagate animations through the DOM, and through this allow orchestration of elements. 
Variants are passed into motion components through the variants prop. They normally will look like this: const variants = { visible: { opacity: 0, y: -50 }, hidden: { opacity: 1, y: 0 }, } <motion.div initial="hidden" animate="visible" variants={variants} /> These will create the same animation as we did above. You may notice we passed to initial and animate a string. This is strictly used for variants. It tells what keys Framer Motion should look for inside the variants object. For the initial, it will look for 'hidden' and for animate 'visible'. The benefit of using this syntax is that when the motion component has children, changes in the variant will flow down through the component hierarchy. It will continue to flow down until a child component has its own animate property. Let's put this into practice! This time we will create a staggering list. Like this: In the image, each item has an increasing delay between each other's entrance. The first one will enter in 0 seconds, the second in 0.1 seconds, the third in 0.2, and it will keep increasing by 0.1. To achieve this through variants, first, let's create a variants object where we'll store all possible states and transition options: const variants = { container: { }, card: { } }; variants.container and variants.card represent each motion component we'll have. Let's create the animations for the cards. We see that the cards go from left to right while fading in. This means we have to update its x position and opacity. As aforementioned, variants can have different keys for their animation states, however, we will leave it as initial and animate to indicate before mount and after mount, respectively. On initial, our component will be 50 pixels to the left and its opacity will be 0. On animate, our component will be 0 pixels to the left and its opacity will be 1. Like this: const variants = { container: { }, card: { initial: { opacity: 0, x: -50 }, animate: { opacity: 1, x: 0 } } }; Next, we have to add the stagger effect to each of these cards. To achieve this we have to add the container.transition property which allows us to update the behavior of our animation. Inside the property, we'll add a staggerChildren property that defines an incremental delay between the animation of the children. const variants = { container: { animate: { transition: { staggerChildren: 0.1 } } }, card: { initial: { opacity: 0, x: -50 }, animate: { opacity: 1, x: 0 } } }; Now, if we hook this variant to the motion components: import { motion } from "framer-motion"; const variants = { container: { animate: { transition: { staggerChildren: 0.1 } } }, card: { initial: { opacity: 0, x: -50 }, animate: { opacity: 1, x: 0 } } }; const StaggeredList = () => { return ( <motion.div initial="initial" animate="animate" variants={variants.container} > {new Array(5).fill("").map(() => { return <Card />; })} </motion.div> ); }; const Card = () => ( <motion.div variants={variants.card} > Hello World! </motion.div> ); With this, our animation is complete and sleek staggered list ready! Dragging Dragging is a feature that can be daunting to implement in an app. Thankfully, Framer Motion makes it a lot easier to implement its logic because of its declarative nature. In this post, I will give a simple, general introduction to it. However, in a future tutorial, I may explain with more details about how to create something more complex like a slide to delete. Making an element draggable is extremely simple: add a drag prop to a motion component. 
Take for example the following:

import { motion } from "framer-motion";

<motion.div drag>
  Hello World!
</motion.div>

Adding the drag prop will make it draggable in the x-axis and y-axis. It should be noted that you can restrict the movement to a single axis by providing the desired axis to drag. There is a problem with just setting the drag property. It is not bound to any area or container so it can move outside the screen like this:

To set constraints we give dragConstraints an object with our desired constraints for every direction: top, left, right, and bottom. Take for example:

import { motion } from "framer-motion";

<motion.div
  drag
  dragConstraints={{ top: -50, left: -50, right: 50, bottom: 50 }}
>
  Hello World!
</motion.div>

These constraints allow the element to move 50 pixels maximum in any direction. If we try to drag it, for example, 51 pixels to the top, it will be stopped and bounced. Like this:

It is as if there were an invisible wall in the form of a square that won't allow the component to move further.

Layout Property

The layout prop is a powerful feature in Framer Motion. It allows for components to animate automatically between layouts. It will detect changes to the style of an element and animate it. This has a myriad of use cases: reordering of lists, creating switches, and many more.

Let's use this immediately! We'll build a switch. First, let's create our initial markup:

import { motion } from "framer-motion";

const Switch = () => {
  return (
    <div
      className={`flex w-24 p-1 bg-gray-400 bg-opacity-50 rounded-full cursor-pointer`}
      onClick={toggleSwitch}
    >
      {/* Switch knob */}
      <motion.div
        className="w-6 h-6 p-6 bg-white rounded-full shadow-md"
        layout
      ></motion.div>
    </div>
  );
};

Now, let's add our logic:

import { motion } from "framer-motion";

const Switch = () => {
  const [isOn, setIsOn] = React.useState(false);
  const toggleSwitch = () => setIsOn(!isOn);
  return (
    <div onClick={toggleSwitch}>
      {/* Switch knob */}
      <motion.div layout></motion.div>
    </div>
  );
};

You may have noticed that only our knob has the layout prop. This prop is required on only the elements that we wish to be animated. We want the knob to move from one side to the other. We could achieve this by changing the container flex justification. When the switch is on then the layout will have justify-content: flex-end. Framer Motion will notice the knob's change of position and will animate its position accordingly. Let's add this to our code:

<div
  className={`flex w-24 p-1 bg-gray-400 bg-opacity-50 rounded-full cursor-pointer ${
    isOn ? "justify-end" : "justify-start"
  }`}
  onClick={toggleSwitch}
>
  {/* Switch knob */}
  <motion.div layout></motion.div>
</div>

I added some other styles so it can resemble the look of a switch. Anyway, here is the result:

Great! It is amazing how Framer Motion can do this automatically without having to deal with extra controls. Anyhow, it looks a little bland compared to what we are used to seeing on apps like Settings. We can fix this pretty quickly by adding a transition prop.

<motion.div
  layout
  transition={{
    type: "spring",
    stiffness: 500,
    damping: 30,
  }}
></motion.div>

We define a spring-type animation because we want a bouncy feel. The stiffness defines how sudden the movement of the knob will look. And damping defines the strength of the opposing force, similar to friction. This means how fast it will stop moving. These together create the following effect:

Now our switch looks more alive!

Conclusion

Creating animations can be daunting, especially when many libraries have complex jargon. Thankfully, Framer Motion allows developers to create seamless animations with its declarative and intuitive API.
This post was meant as an introduction to the fundamentals of Framer Motion. In future posts, I will create complex animations like swipe to expand and delete, drawers, shared layout, and many more. Please let me know in the comments if you have any suggestions as to what you want to see animated!
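As a closing illustration of the exit animations covered earlier (the "deleting an item" use case), here is a compact sketch that is not from the original post; the item shape and the onRemove handler are made up for the example. AnimatePresence keeps each unmounting item in the DOM until its exit animation has finished:

import { motion, AnimatePresence } from "framer-motion";

const List = ({ items, onRemove }) => (
  <ul>
    <AnimatePresence>
      {items.map((item) => (
        <motion.li
          key={item.id} // a stable key is required so exits are tracked correctly
          initial={{ opacity: 0, y: -10 }}
          animate={{ opacity: 1, y: 0 }}
          exit={{ opacity: 0, x: -50 }}
          onClick={() => onRemove(item.id)}
        >
          {item.label}
        </motion.li>
      ))}
    </AnimatePresence>
  </ul>
);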
https://practicaldev-herokuapp-com.global.ssl.fastly.net/joserfelix/getting-started-with-react-animations-308a
CC-MAIN-2021-04
en
refinedweb
C++ Find Minimum Element in a Rotated Sorted Vector Program

Hello Everyone!

In this tutorial, we will demonstrate the logic of Finding the Minimum Element in a Rotated Sorted Vector, in the C++ programming language.

What is a Rotated Sorted Vector?

A Rotated Sorted Vector is a sorted vector rotated at some pivot element unknown to you beforehand.

Example: [4,5,6,7,0,1,2] is one of the rotated sorted vectors for the sorted vector [0,1,2,4,5,6,7].

For a better understanding of its implementation, refer to the well-commented C++ code given below.

Code:

#include <iostream>
#include <bits/stdc++.h>

using namespace std;

int findMin(vector<int> &m)
{
    int i;
    int n = m.size();

    // The minimum is the only element that is smaller than both of its
    // neighbours (the neighbours wrap around the ends of the vector).
    for (i = 0; i < n; i++)
    {
        if (i == 0)
        {
            if (m[i] < m[n - 1] && m[i] < m[1])
                break;
        }
        else
        {
            if (m[i] < m[i - 1] && m[i] < m[(i + 1) % n])
                break;
        }
    }

    // If the loop ran to completion, i == n and i % n wraps back to 0.
    return m[i % n];
}

int main()
{
    cout << "\n\nWelcome to Studytonight :-)\n\n\n";
    cout << " ===== Program to find the Minimum element in a rotated Sorted Vector, in CPP ===== \n\n\n";
    cout << " ===== Logic: The minimum element will have a larger number on both its right and its left. ===== \n\n\n";

    // initializing the vector with a rotation of the sorted vector [1,2,3,4,5,6,7]
    vector<int> v = {4, 5, 6, 7, 1, 2, 3};

    int n = v.size();
    int mini = 0;

    cout << "The elements of the given vector are : ";

    for (int i = 0; i < n; i++)
    {
        cout << v[i] << " ";
    }

    mini = findMin(v);

    cout << "\n\nThe Minimum element in the given vector is: " << mini;

    cout << "\n\n\n";

    return 0;
}

Output:

We hope that this post helped you develop a better understanding of the concept of finding a minimum element in a rotated sorted vector and its implementation in C++.

For any query, feel free to reach out to us via the comments section down below.

Keep Learning : )
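The findMin above scans every element, which is O(n). Since a rotated sorted vector is sorted on each side of the rotation point, the minimum can also be found in O(log n) with a binary search. The variant below is an addition to the tutorial and assumes a non-empty vector without duplicate values:

#include <vector>

using namespace std;

// Binary search version: O(log n) for a non-empty rotated sorted vector
// that contains no duplicates.
int findMinBinary(const vector<int> &m)
{
    int lo = 0, hi = (int)m.size() - 1;

    while (lo < hi)
    {
        int mid = lo + (hi - lo) / 2;

        if (m[mid] > m[hi])
            lo = mid + 1;   // the drop (and so the minimum) is to the right of mid
        else
            hi = mid;       // the minimum is at mid or to its left
    }

    return m[lo];
}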
https://studytonight.com/cpp-programs/cpp-find-minimum-element-in-a-rotated-sorted-vector-program
CC-MAIN-2021-04
en
refinedweb
There are a number of important changes between ember-cli-typescript v1 and v2, which mean the upgrade process is straightforward but specific: Update ember-cli-babel. Fix any problems introduced during the upgrade. Update ember-decorators. Fix any problems introduced during the upgrade. Update ember-cli-typescript. Follow the detailed upgrade guide below to fix discrepancies between Babel and TypeScript's compiled output. If you deviate from this order, you are likely to have a much more difficult time upgrading! ember-cli-typescript requires ember-cli-babel at version 7.1.0 or above, which requires ember-cli 2.13 or above. It also requires @babel/core 7.2.0 or higher. The recommended approach here is to deduplicate existing installations of the dependency, remove and reinstall ember-cli-babel to make sure that all its transitive dependencies are updated to the latest possible, and then to deduplicate again. If using yarn: npx yarn-deduplicateyarn remove ember-cli-babelyarn add --dev ember-cli-babelnpx yarn-deduplicate If using npm: npm dedupenpm uninstall ember-cli-babelnpm install --save-dev ember-cli-babelnpm dedupe Note: If you are also using ember-decorators—and specifically the babel-transform that gets added with it—you will need update @ember-decorators/babel-transforms as well (anything over 3.1.0 should work): ember install [email protected]^3.1.0 @ember-decorators/[email protected]^3.1.0 If you're on a version of Ember before 3.10, follow the same process of deduplication, reinstallation, and re-deduplication as described for ember-cli-babel above for ember-decorators. This will get you the latest version of ember-decorators and, importantly, its @ember-decorators/babel-transforms dependency. Now you can simply ember install the dependency like normal: ember install [email protected] Note: To work properly, starting from v2, ember-cli-typescript must be declared as a dependency, not a devDependency for addons. With ember install this migration would be automatically handled for you. If you choose to make the upgrade manually with yarn or npm, here are the steps you need to follow: Remove ember-cli-typescript from your devDependencies. With yarn: yarn remove ember-cli-typescript With npm: npm uninstall ember-cli-typescript Install the latest of ember-cli-typescript as a dependency: With yarn: yarn add [email protected] With npm: npm install --save [email protected] Run ember generate: ember generate ember-cli-typescript Since we now integrate in a more traditional way into Ember CLI's build pipeline, there are two changes required for addons using TypeScript. Addons can no longer use .ts in app, because an addon's app directory gets merged with and uses the host's (i.e. the other addon or app's) preprocessors, and we cannot guarantee the host has TS support. Note that .ts will continue to work for in-repo addons because the app build works with the host's (i.e. the app's, not the addon's) preprocessors. Similarly, apps must use .js to override addon defaults in app, since the different file extension means apps no long consistently "win" over addon versions (a limitation of how Babel + app merging interact). ember-cli-typescript v2 uses Babel to compile your code, and the TypeScript compiler only to check your code. This makes for much faster builds, and eliminates the differences between Babel and TypeScript in the build output that could cause problems in v1. However, because of those differences, you’ll need to make a few changes in the process of upgrading. 
Any place where a type annotation overrides a getter: fields like element, disabled, etc. that are annotated on a subclass of Component and (correctly) not initialized to anything, e.g.:

import Component from '@ember/component';

export default class Person extends Component {
  element!: HTMLImageElement;
}

This breaks because element is a getter on Component. This declaration then shadows the getter declaration on the base class and stomps it to undefined (effectively Object.defineProperty(this, 'element', void 0)). (It would be nice to use declare here, but that doesn't work: you cannot use declare with a getter in a concrete subclass.) Two solutions:

Annotate locally (slightly more annoying, but less likely to troll you):

class Image extends Component {
  useElement() {
    let element = this.element as HTMLImageElement;
    console.log(element.src);
  }
}

Use a local getter:

class Image extends Component {
  // We do this because...
  get _element(): HTMLImageElement {
    return this.element as HTMLImageElement;
  }

  useElement() {
    console.log(this._element.src);
  }
}

Notably, this is not a problem for Glimmer components, so migrating to Octane will also help!

const enum is not supported at all. You will need to replace all uses of const enum with simply enum or constants.

Using ES5 getters or setters with this-type annotations is not supported through at least Babel 7.3. However, they should also be unnecessary with ES6 classes, so you can simply remove the this type annotation.

Trailing commas after rest function parameters ( function foo(...bar[],) {}) are disallowed by the ECMAScript spec, so Babel also disallows them.

Re-exports of types have to be disambiguated to be types, rather than values. Neither of these will work:

export { FooType } from 'foo';

import { FooType } from 'foo';
export { FooType };

In both cases, Babel attempts to emit a value export, not just a type export, and fails because there is no actual value to emit. You can do this instead as a workaround:

import * as Foo from 'foo';
export type FooType = Foo.FooType;
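To make the const enum note above concrete, here is a sketch of the kind of change required; the enum name and values are invented for the example. Babel compiles each file in isolation, so it cannot inline const enum members across files the way tsc does:

// Before (no longer compiles under v2):
//   const enum Status { Active, Archived }

// After, option 1: a regular enum
enum Status {
  Active,
  Archived,
}

// After, option 2: plain constants
const ACTIVE = 0;
const ARCHIVED = 1;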
https://docs.ember-cli-typescript.com/upgrade-notes
CC-MAIN-2021-04
en
refinedweb
DEBSOURCES Skip Quicknav sources / libxml-twig-perl / 1:3.521650 CHANGES 3.52 - 2016-11-23 - minor maintenance release - fixed: the previous fix was buggy... 3.51 - 2016-11-23 - minor maintenance release - fixed: failing tests when XML::XPathEngine and XML::XPath not available 3.50 - 2016-11-22 - minor maintenance release - added: the no_xxe option to XML::Twig::new, which causes the parse to fail if external entities are used (to prevent malicious XML to access the filesystem). See RT#118097 - fixed: warning (and soon error) due to unescaped literal left braces in regular expressions in the code generating Twig.pm reported by trwyant - fixed: (partial fix) implement getNamespaces in XML::Twig::XPath::Elt the expression doesn't crash the code, but doesn't return anything interesting (yet) reported by Nathan Glenn - fixed: various spelling mistakes thanks to James McCoy for the patch - git repo cleanup, thanks to mjg17 3.49 - 2015-04-12 - minor maintenance release - added: the DTD_base option to XML::Twig new, that forces XML::Twig to look for the DTD in a given directory thanks to Arun lakhana for the idea - - COMPATIBILITY WARNING fixed: bug RT # spotted[@def="1"]/title' - COMPATIBILITY WARNING Up to version 3.26, you could change the attribute of a parent of a node on which you had a handler, and be able to trigger a handler on that parent node based on the new attribute value: XML::Twig->new( twig_handlers => { 'sect/title' => sub { $_->parent->set_att( has_title => 1)}, 'sect[@has_title="1"]'=> sub { ... }, # called for any sect that has } # a title ); This won't work now. The trigger expression ('sect[@has_title="1"]') is evaluated strictly against the input XML. This is more logical and consistent (if you changed the element name, the new name was never used in the evaluation of the trigger). The only exception to that rule is if you use "private attributes": attributes which name starts with a '#'. By definition this in an invalid XML name, so it can't be in the input, and has to have been created . In that case the code that evaluates the trigger looks at the attribute in the element in the tree in memory (if it exists). So in the example above, if you replace 'has_title' by '#has_title', everything will work fine. Note that private attributes are not output when using the print/sprint/xml_string... methods. - fixed: xml_pp so it does not leave a tempfile and a broken original file all when the original file is not well-formed. - added: the nparse_pp method that does an nparse with pretty_print set to 'indented', nparse_e that sets error_context, and nparse_ppe that does both - added: XML::Twig::Elt tag_to_span and tag_to_div methods (turn an element into a span/div and set its class to the old tag name) - added: the quote option for XML::Twig new, which sets the output quote character for attributes ('single' or 'double') - added: the text_only and xml_text_only methods that return the text of the element, but not of the sub-elements. - added: outer_xml method (synonym for sprint) - fixed: bug where entity names were not matched properly (RT #22854, spotted by Bob Faist) - fixed: bug on some DOCTYPE config with twig_print_outside_roots - fixed: bug in set_keep_encoding (the method, not the option). 
- fixed: bug in simplify: the code attempted to replace variables in attribute values even if no option required it, spotted by Klaus Rush - improved: clean-up and fixed bugs in ignore: the method can now be called from a regular handler (it always could but the docs did not say so, thanks to kudra for noticing this). It can also be called to ignore a parent of the current element. There were bugs there, and the tree was not built properly - added: error message when an XPath query with a leading / is used on a node that does not belong to a whole twig (because it's been cut or because the twig itself went out of scope) - improved: when parsing HTML with error_context set, the HTML is indented, in order to give better error 3.26 - 2006-07-01 - added: argument to -i in the Makefile to prevent problem in win32 - added: XML::Twig::Elt former_next_sibling, former_prev_sibling and former_parent methods - squashed a memory leak when parsing html (forgot to call delete on the HTML::Tree object) - fixed: bug that caused XML::Twig to hang if there was a syntax error in a predicate (RT#19499, reported by Dan Dascalescu) -improved: made start_tag and end_tag more consistent: they now both return the empty string for comments, PIs... (reported by Dan Dascalescu) - added: parsefile_inplace and parsefile_html_inplace methods (thanks to GrandFather on perlmonks) - added: support to add css stylesheet in the add_stylesheet method (thanks to Georgi Sotirov) - patched tests to work on Win32 - added: set_inner_xml inner_xml and set_inner_html methods 3.25 - 2006-05-10 - patched to work with perl 5.005! - fixed: a bug in xml_pp when pretty printing a file in place in a different file system 3.24 - 2006-05-09 - added: loading the text of entities stored in separate files (using SYSTEM) when the (awfully named!) expand_external_ents option is used. Thanks to jhx for spotting this. - changed: set_cdata, set_pi and set_comment so that if you call them on an element of the wrong kind, everything works as expected, instead of swallowing silently the data. Bug spotted by cmccutcheon - fixed: a whole bunch of things to make the module run and the tests pass on VMS, thanks to Peter (Stig) Edwards who reported bug RT #18655 and provided a patch. - fixed: bug on get_xpath( '/root[1]') expressions, RT #18789 spotted by memfrob. - added: the add_stylesheet method, that... adds a stylesheet (xsl type is supported, let me know if other types are needed) to a document. - improved: allowed pasting PI/Comment elements before or after the root of a document (see discussion at). Thanks to rogue90 for noticing the problem, and to Tanktalus for finding the best way to solve it. - added: aliased unwrap to erase (ie added the unwrap method to XML::Twig::Elt, identical to the existing erase) suggested by Chris Burbridge. - fixed: bug RT #17522: flushing twice at the end of the the parse would output the last fragment twice. Spotted by Harco de Hilster. - fixed: bug RT #17500: parsing a pipe when using the UTF8 perlIO layer (through PERL_UNICODE or -C) now raises an error, found by Nikolaus Rath. cwimproved: made the tests pass when the UTF8 perlIO layer is used. At this point potential problems when parsing non-UTF8 XML in this configuration are not trapped. 3.23 - 2006-01-23 - added: autoflush: there is no more need for the last $twig->flush after the parsing, it is done automatically at the end of the parsing, with the same arguments as the first flush on the twig. This can be turned of by setting $twig->{twig_autoflush} to 0. 
WARNING: if you finished the output with a direct print instead of a flush, then this change will cause a bug. Hopefully this should not be the case and is easily fixable. - fixed: bug RT #17145 where get_xpath('//root/elt[1]/child') would produce a fatal error if there were no elt element under root. Spotted by Dan Dascalescu. - fixed: bug RT #17064 (comments and PIs after the root element were not properly processed), spotted by Dan Dascalescu. - fixed: bug RT #17044: the SYSTEM value was not output in UpdateDTD mode, thanks to Michal Lewandowski for pointing this out. - changed: the way empty tags are expanded with the 'html' style: only tags that are allowed to be empty in XHTML are output as '<tag />', thanks to Tom Rathborne for proding me to look into this. - added: a 'wrapped' pretty_print option, that is a bit dodgy I think but that might please some. - fixed: bug RT #16540 (tags with specific names (like 'level'), tripped XML::Twig, spotted by Graham - added: comparison with XML::LibXML in the SEE ALSO section (and in the FAQ), following a question from surf on c.l.p.m - added: XML::Twig now rejects string/regexp condition in twig_roots - added: better error checking in xml_grep - fixed: string/regexp condition in xml_grep - added: support for ! @att (or not @att) in get_xpath - added: support for several predicates in get_xpath (not nested predicates though). - fixed: bug RT #15671 (wrong condition interpretation for attribute value 0) - added: XML::Twig print_to_file method - added: XML::Twig::Elt methods: following_elt, following_elts, preceding_elt, preceding_elts (needed to support the corresponding axis in get_xpath) 3.22 - 2005-10-14 - added: the XML::Twig xparse method, which parses whatever is thrown at it (filehandle, string, HTML file, HTML URL, URL or file). - added: the XML::Twig nparse method, which creates a twig and then calls xparse on the last parameter. - added: the parse_html and parsefile_html methods, which parse HTML strings (or fh) and files respectively, with the help of HTML::TreeBuilder. the implementation may still change. Note that at the moment there seems to be encoding problems with it (if the input is not UTF8). - added: info to t/zz_dump_config.t - fixed: a bug that caused subs_text to leave empty #PCDATA elements if the regexp matched at the beginning or at the end of the text of an element. - fixed: RT #15014: in a few methods objects were created as XML::Twig::Elt, instead of in the class?!F of the calling object. - fixed: RT #14959: problem with wrap_children when an attribute of one of the child element includes a '>' - improved: the docs for wrap_children - added: a better error message when re-using an existing twig during the parse - fixed: (partially) a bug with windows line-endings in CDATA sections with keep_encoding set (RT #14815) - added: Test::Pod::Coverage test to please the kwalitee police ;--) 3.21 - 2005-08-12 - fixed: a test that failed if Tie::IxHash was not available - added: link to Atom feed for the CPAN testers results at 3.20 - 2005-08-11 - fixed: the pod (which caused the tests to fail) 3.19 - 2005-08-10 - fixed: the fix to RT # 14008, this one should be ok restructured tests - added: the _dump method (probably not finished) 3.18 - 2005-08-08 - added: a fix to deal with a bug in XML::Parser in the original_string method when used in CDATA sections longer than 1024 chars (RT # 14008) thanks to Dan Dascalescu for spotting the bug and providing a test case. 
- added: better error diagnostics when the wrong arguments are used in paste - fixed: a bug in subs_text when the text of an element included \n (RT #13665) spotted by Dan Dascalescu - improved: cleaned up the behaviour of erase when the element being erased has extra_data (comments or pis) attached - fixed: a bug in subs_text that sometimes messed up text after the matching text - fixed: the erase/group_tags option of simplify to make it exactly similar to XML::Simple's - fixed: a bug that caused XML::Twig to crash when ignore was used with twig_roots (RT #13382) spotted by Larry Siden - fixed: bug in xml_split with default entities (they ended up being doubly escaped) - fixed: various bugs when dealing with ids (changing existing ids, setting the attribute directly...) - improved mark and split, both methods now accepts several tags/ as arguments, so you can write for example: $elt->mark( qr/^(\w+): (.*)$/, 'dt', 'dd'); - added: XML::Twig::Elt children_trimmed_text method, patch sent by ambrus (RT #12510) - changed: children_text and children_trimmed_text to have them return the entire text in scalar context - fixed: bug that caused XML::Twig not to play nice with XML::Xerces (due to improper import of UNIVERSAL::isa) spotted and patched by Colin Robertson. - changed: most references to 'gi' in the docs, replaced them by tag. I guess Robin Berjon's relentless teasing is to be credited with this one. - added: tag_regexp condition on handlers (a regexp instead of a regular condition will trigger the handler if the tag matches), suggested by Franck Porcher, implementation helped by a few Perl Monks (). - fixed: typos in xml_split (RT #11911 and #11911), reported by Alexey Tourbin - added: tests for xml_split and xml_merge and fixed a few bugs in the process - added: the -i option to xml_split and xml_merge, that use XInclude instead of PIs (preliminary support, the XInclude namespace is not declared for example). - added the XML::Twig and XML::Twig::Elt trim method that trims an element in-place -added the XML::Twig last_elt method and the XML::Twig::Elt last_descendant method - added: more tests 3.17 - 2005-03-16 - improved: documentation, mostly to point better to the resources at -fixed: a few tests that would fail under perl 5.6.* and Solaris (t/test_safe_encode.t and t/test_bug_3.15.t), see RT bug # 11844, thanks to Sven Neuhaus - changed: the licensing terms in the README to match the ones in the main module (same as Perl), see RT bug #11725 - added: a test on XML::SAX::Writer version number to avoid failing tests with old versions (<0.39) - improved: xml_split 3.16 - 2005-02-11 - added: the xml_split/xml_merge tools - fixed: PI handler behaviour when used in twig_roots mode - fixed: bug that prevented the DTD to be output when update_DTD mode is on, no DTD is present but entities have been created - added: level(<n>) trigger for handlers - fixed: bug that prevented the output_filter to be called when printing an element. Spotted thanks to Louis Strous. - fixed: bug in the nsgmls pretty printer that output invalid XML (an extra \n was added in the end tag) found by Lee Goddard - fixed: test 284 in test_additional to make it pass in RedHat's version of perl 5.8.0, thanks to rdhayes for debugging and fixing that test. - improved: first shot at getting Pis and comments back in the proper place, even in 'keep' mode. 
At the moment using set_pcdata (or set_cdata) removes all embedded comments/pis - fixed: a bug with pi's in keep mode (pi's would not be copied if they were within an element) found by Pascal Sternis - added: a fix to get rid of spurious warnings, sent by Anthony Persaud - added: the remove_cdata option to the XML::Twig new method, that will output CDATA sections as regular (escaped) PCDATA - added: the index option to the XML::Twig new method, and the associated XML::Twig index method, which generates a list of element matching a condition during parsing - added: the XML::Twig::Elt first_descendant method - fixed: bug with the keep_encoding option where attributes were not parsed when the element name was followed by more than one space (spotted by Gerald Sedrati-Dinet), see - fixed: a bug where whitespace at the beginning of an element could be dropped (if followed by an element before any other character). Now whitespace is dropped only if it includes a \n - added: feature: when load_DTD is used, default attributes are now filled - fixed: bug on xmlns in path expression trigger (would not replace prefixes in path expressions), spotted by amonroy on perlmonks, see - optimized: XML::Twig text, thanks to Nick Lassonde for the patch - fixed: bug that generated an empty line before some - fixed: tests to check XML::Filter::BufferText version (1.00 has a bug in the CDATA handling that makes XML::Twig tests fail). - added: new options --nowrap and --exclude (-v) to xml_grep - fixed: warning in tests under 5.8.0 (spotted by Ed Avis) - improved: skipped HTML::Entities tests in 5.8.0 (make test for this module seem to fail on my system, it might be the same elsewhere) - fixed: bug RT #6067 (problems with non-standard versions of Scalar::Utils which do not include weaken) - fixed: bug RT #6092 (error when using safe output filter) - fixed: bug when using map_xmlns, tags in default namespace were not output 3.15 - 2004-04-05 - fixed: tests now pass on more systems (thanks to Ed Avis for his testing) - added: normalize_space option for simplify (suggestion of Lambert Lum) - improved: removed usage of $& - improved: the doc for paste, as it was a bit short (suggestion of Richard Jolly) 3.14 - 2004-03-17 - improved: namespace processing , it should work fine now, as long as twig_roots is not used. - COMPATIBILITY WARNING: Potentially uncompatible change: the behaviour of simplify has been changed to mimic as exactly as possible XML::Simple's XMLin - improved: the pod to cover the entire API - improved: tests, now pass with perl 5.005_04-RC1 (fail with 5.005 reported by David Claughton), added more tests and a config summary at the end of the tests - added: methods on the class attribute, convenient for dealing with XHTML or preparing display with CSS: class set_class add_to_class att_to_class add_att_to_class move_att_to_class tag_to_class add_tag_to_class set_tag_class in_class navigation functions can use '.<class>' expressions - fixed: (yet another!) 
bug in the way DTDs were output - fixed: bug for pi => 'drop' option - changed: the names of lots on internal (undocumented) methods, prefixed them with _ 3.13 - 2004-03-16 - maintenance release to get the tests to pass on various platforms - improved: the README file - fixed: problem with encoding conversions (using safe_encode and safe_encode_hex) under perl 5.8.0, see RT ticket #5111 - fixed: tests to pass when trying to use an unsupported iconv filter 3.12 - 2004-01-29 - new features and greatly increased test coverage - added: lots of tests (>900), thanks to David Rigaudiere, Forrest Cahoon, Sebastien Aperghis-Tramoni, Henrik Tougaard and Sam Tregar for testing this release on various OSs, Perl, XML::Parser and expat versions. - added: XML::Twig::XPath that uses XML::XPath as the XPath engine for findnodes, findnodes_as_string, findvalue, exists, find and matches. Just use XML::Twig::XPath instead of use XML::Twig; (see the tests in t/xmlxpath_*). - added: special case to output some HTML tags ('script' to start with) as not empty. - fixed: XML::Twig::Elt->new now properly flags empty elements (spotted by Dave Roe) - added: XML::Twig::Elt contains_a_single method - added: #ENT twig_handlers (not necessarily complete, so not yet documented, needs more tests) - added: doc for XML::Twig and XML::Twig::Elt subs_text methods tags starting with # are now "invisible" (they are not output), useful for example for pretty_printing - added: new options --wrap '' and --date to xml_grep improved XPath support (added [nb] support) - added: xpath method, which generates a unique XPath for an element - added: has_child and has_children as synonyms of first_child - added: XML::Twig::set_id_seed to control how generated id's are created - improved: when using ignore on an element, end_tag_handlers are now tested at the end of the element (so you can for example get the byte offset in the document), suggestion of Philippe Verdret - added: XML::Twig::Elt change_att_name - fixed: XML::Twig::Elt new now properly works when called as an object (and not a class) method - fixed: namespace processing somewhat - fixed: SAX output methods - fixed: bug when keep_atts_order on and using set_att on an element with no existing attribute (spotted by scharloi) - COMPATIBILITY WARNING: WARNING - potentially incompatible changes - when using finish_print, the document used to be flushed. This is no longer the case, you will have to do it before calling finish_print. This way you have the choice of doing it or not. - improved: removed XML::Twig::Elt::unescape function (was no longer used) 3.11 - 2003-08-28 - added: --text_only option to xml_grep (outputs the text of the result, without tags) - fixed: bug where "Comments [was] always dropped after a twig object set 'comments' to 'drop'" (RT#3711), bug report and first patch by Simon Flack - added: option "keep_atts_order" that keeps the original attribute order in the output. This option needs the Tie::IxHash module to work. 
3.10 - 2003-06-09 - added: xml_pp xml_grep and xml_spellcheck to the distribution - improved: the print method now calls 'print $elt->sprint' instead of printing content as it converts them to text, in order to reduce the number of calls to Perl's print (which should increase performance) - changed: XML::Twig::Elt erase to allow erasing the root element if it has only 1 child - added: findvalue method to XML::Twig and XML::Twig::Elt - added: aliased findnodes to get_xpath in XML::Twig and XML::Twig::Elt - added: the elt_class option to XML::Twig::new - added: the do_not_chain_handlers option to XML::Twig::new - added: the XML::Twig::Elt is_first_child and is_last_child methods - improved: set_gi,set_text, prefix, suffix, set_att, set_atts, del_atts, del_att now return the element for easier chains - fixed: bug in pretty printing comments before elements (RT #2315) - added: the XML::Twig::Elt children_copy method which returns a list of elements that are copies of the children of the element - fixed: a bug in wrap_in when the element wrapped is not attached to a tree - fixed: bug with get_xpath: regexp modifiers were not taken into account spotted by Eric Safern (RT #2284) - fixed: bug in methods inherited from XML::Parser::Expat (arguments were not properly passed) - improved: installed local empty SIG handlers to trap error messages triggered by require for optional modules, so that user signal handlers would not have to deal with them (suggestion from Philippe Verdret) - fixed: bug in the navigation XPath engine: text() was used instead of string(). Both are now allowed. - added: XML::Twig::Elt sort_children, sort_children_on_value, sort_children_on_att and sort_children_on_field methods that sort the children of an element in place - added:XML::Twig::Elt field_to_att and att_to_field methods - fixed:a memory leak due to ids not being weak references - added: the XML::Twig::Elt wrap_children method that wraps children of an element that satisfy a regexp in a new element - added: the XML::Twig::Elt add_id method that adds an id to an element - added: the XML::Twig::Elt strip_add method that deletes an attribute from an element and its descendants - COMPATIBILITY WARNING fixed:a quasi-bug in set_att where the hash passed in reference was used directly, which makes it a problem when the same reference is passed several times: all the elements share the same attributes. This is a potentially incompatible change for code that relied on this feature. Please report problems to the author. - fixed: bug in set_id - fixed: bug spotted by Bill Gunter: allowed _ as the initial character for XML names. 
Also now allow ':' as the first element - added: the simplify methods, which load a twig into an XML::Simple like data structure - fixed: bug in get_type and is_elt, spotted and fixed by Paul Stodghill - added: the XML::Twig::Elt ancestors_or_self method - fixed: bug when doc root is also a twig_root (twig was not built) - improved: the README (fleshed out examples, added OS X to the list of tested platforms) - fixed: bug when using the no_dtd_output option - added: doc for the XML::Twig::Elt children_count method - added: the XML::Twig::Elt children_text method - improved: updated the doc so it can be properly formatted by my custom pod2html, the generated doc (with a bigger ToC and better links) is available from the XML::Twig page at 3.09 - 2002-11-10 - added: XML::Twig::Elt xml_text method - fixed: several bugs in the split method under 5.8.0 when matching a utf8 character (thanks to Dominic Mitchell who spotted them) - improved: cleaned-up the pod (still in progress) - added: the XML::Twig::Elt pos method that gives the position of an element in its parent's child list - fixed: re-introduced parseurl (thanks to Denis Kolokol for spotting its absence in this version) - fixed: ent_tag_handlers were not called on the root (thanks to Philippe Verdret - improved: #PI (also declared as '?') and #COMMENT handler support - added: check on reference type (must be XML::Twig::Elt) in XML::Twig::Elt::paste (patch by Forrest Cahoon) 3.08 - 2002-09-17 - fixed: the previous fix wasn't enough :--( 3.07 - 2002-09-17 - fixed:the way weaken is imported from Scalar::Util 3.06 - 2002-09-17 - added: XML::Twig::Elt trimmed_text and related methods (trimmed_field, first_child_trimmed_text, last_child_trimmed_text...) - added: XML::Twig::Elt replace_with method - added: XML::Twig::Elt cut_children method - added: XML::Twig contains_only method - added: *[att=~ /regexp/] condition type (suggested by Nikola Janceski) - fixed: bug in the way handlers for gi, path and subpath were chained (Thanks to Tommy Wareing) - fixed: bug where entities caused an error on other handlers (Thanks to Tommy Wareing) - fixed: bug with string(sub_elt)=~ /regexp/ (thanks to Tommy Wareing) - fixed: bug with output_filter used with expand_external_entities (thanks to Tommy Wareing) - fixed: (yet another!) bug with whitespace handling (whitespace, then an entity made the whitespace move after the entity) (spotted by the usual Tommy Wareing) - added: an error message when pasting on an undef reference (suggestion of Tommy Wareing) - fixed: bug in in_context (found by Tommy Wareing) - fixed: bug when loading the DTD (local undef $/ did not stay local, bug found and patch sent by Steve Pomeroy and Henry Cipolla) - fixed: bug in setting output filter - fixed: bug in using a filehandle with twig_print_outside_roots - added: safe_encode_hex filter - fixed: bug in set_indent, $INDENT not set properly (thanks to Eric Jain) - fixed: dependencies (no check with 5.8.0, added Scalar::Util as a possible source for weaken) - added: no_prolog option to XML:Twig::new - improved: tested build on Windows (thanks to Cory Trese and Josh Hawkins) - changed:in 3.05 - added: _ALPHA_ SAX export methods: XML::Twig toSAX1, toSAX2, flush_toSAX1, flush_toSAX2 XML::Twig::Elt toSAX1, toSAX2 The following gotchas apply: + these methods work only for documents that are completely loaded by XML::Twig (ie if you use twig_roots the data outside of the roots will not be output as SAX). 
+ SAX1 support is a bit dodgy: the encoding is not preserved (it is always set to 'UTF-8'), + locator is not supported (and probably will not, what's the location of a newly created element?) Also when exporting SAX you should consider setting Twig to a mode where all aspects of the XML are treated as nodes by XML::Twig, by setting the following options when you create the twig: - improved: twig_print_outside_roots now supports a file handle ref as argument: the untouched part of the tree will be output to the filehandle: - added: the 'indented_c' style that gives a slightly more compact pretty print than 'indented': the end tags are on the same line as the preceding text (suggestion of Hugh Myers) - added: option in get_xpath (aka find_nodes) to apply the query to a list of elements - added: processing of conditions on the current node in get_xpath: my @result= get_xpath( q{.[@att="val"]}); This is of course mostly useful with the previous option. The idea stemmed from a post from Liam Quin to the perl-xml list - added: XML::Twig xml_version, set_xml_version, standalone, set_standalone methods on the XML declaration - fixed: bug in change_gi (which simply did not work at all), found by Ron Hayden. - fixed: bug in space handling with CDATA (spaces before the CDATA section were moved to within the section), comments and PI's - fixed: bug in parse_url (exit was not called at the end of the child), found by David Kulp - improved: cleanup a bit the code that parses xpath expressions (still some work to be done on this though), fixed a bug with last, found by Roel de Cock - fixed: the SYNOPSIS (parsefile is used to parse files, spotted by e.sammer) - fixed: bug in pretty printing (reported by Zhu Zhou) - fixed: bugin the install: the Makefile now uses the same perl used to perl Makefile.PL to run speedup and check_optional_modules (reported by Ralf Santos) - fixed: bugs in pretty printing when using flush, trying to figure out as well as possible if an element contains other elements or text (there is still a gotcha, see the BUGS section in the docs) - fixed: bug that caused the XML declaration and the DTD not to be reset between parses - improved: the conversion functions (errors are now reported when the function is created and not when it is first used) - added: the output_encoding option to XML::Twig->new, which allows specifying an encoding for the output: the conversion filter is created using Encode (perl 5.8.0) Text::Iconv or Unicode::* The XML declaration is also updated - added: #CDATA and #ENT can now be used in handler expressions - added: XML::Twig::Elt remove_cdata method, which turns CDATA sections into regular PCDATA elements - improved: set_asis can now be used to output CDATA sections un-escaped (and without the CDATA section markers) 3.04 - 2002-04-01 - fixed: handlers for XML::Parser 2.27 so the module can pass the tests 3.03 - 2002-03-26 - fixed: bugs in entity handling in twig_roots mode - added: the ignore_elts option, to skip completely elements - improved: enhanced the XPath-like syntax in navigation and get_xpath methods: added operators (>, < ...) 
- fixed: [RT 168]: setTwigHandler failed when no handler was already set (thanks to Jerry) - improved: turned %valid_option into a package global so AnyData can access it - fixed: bug in sprint that prevented it from working with filters - fixed: bug in erase when erasing an empty element that was the last child of its parent ([RT390], thanks to Julian Arnold) - fixed: copy now correctly copies the asis status of elements - fixed:typos on the docs (thanks to Shlomo Yona) - added: tests (for erase and entities in twig_roots mode) 3.02 - 2002-01-16 - fixed: tweaked speedup to replace constructs that did not work in perl 5.005003 3.01 - 2002-01-09 - fixed: the directory name in the tar file 3.00 - 2002-01-09 - COMPATIBILITY WARNING: THIS CHANGE IS NOT BACKWARD COMPATIBLE But it is The Right Thing To Do In normal mode (when KeepEncoding is not used) the XML data is now stored as parsed by XML::Parser, ie the base entities are expanded. The "print" methods (print, sprint and flush, plus the new xml_string, pcdata_xml_string and att_xml_string) return the data in XML-escaped form: & and < are escaped in PCDATA and &, < and the quote (" by default) are turned to & < and " (or ' if the quote is '). The "text" methods (text, att and pcdata) return the stored text as is. So if you want to output XML you should use the "print" methods and if you want to output text you should use the "text" methods. Note that this breaks the trick consisting in adding tags to the content of an element: $elt->prefix( "<b>") no longer adds a <b> tag before an element. $elt->print will now output "<b>...". (but you can still use it by marking those elements as 'asis'). It also fixes the annoying ' thingie that used to replace ' in the data. When the KeepEncoding option is used this is not true, the data is stored asis, base entities are kept un-escaped. Note that KeepEncoding is a global setting, if you use several twigs, some with KeepEncoding and some without then you will have to manually set the option using the set_keep_encoding method, otherwise the last XML::Twig::new call will have set it In addition when the KeepEncoding option is used the start tag is parsed using a custom function parse_start_tag, which works only for 1-byte encodings (it is regexp-based). This method can be overridden using the ParseStartTag (or parse_start_tag) option when creating the twig. This function takes the original string as input and returns the gi and the attributes (in a hash). If you write a function that works for multi-byte encodings I would very much appreciate if you could send it back to me so I can add it to the module, so other users can benefit from it. An additional option ExpansExternalEnts will expand external entity references to their text (in the output, the text stored is &ent;). 
- added: when handlers (twig_handlers or start_tag_handlers) are called $_ is set to the element node, so quick hacks look better: my $t= new XML::Twig( twig_handlers => { elt => sub { print $_->att( 'id'), ": ", $_->text, "\n"; } } ); - added: XML::Twig dispose method which properly reclaims all the memory used by the object (useful if you don't have WeakRef installed) - added: XML::Twig and XML::Twig::Elt ignore methods, which can be called from a start_tag_handlers handler and cause the element (or the current element if called on a twig) to be ignored by the parsing - added: XML::Twig parse_start_tag option that overrides the default function used to parse start tags when KeepEncoding is used - added: XML::Twig::Elt xml_string, pcdata_xml_string and att_xml_string all return an XML-escaped string for an element (including sub-elements and their tags but not the enclosing tags for the element), a #PCDATA element and an attribute - added: XML::Twig::Elt methods tag and set_tag, equivalent respectively to gi and set_gi - added: XML::Twig and XML::Twig::Elt set_keep_encoding methods can be used to set the keep_encoding value if you use several twigs with different keep_encoding options - improved: option names for XML::Twig::new are now checked (a warning is output if the option is not a valid one); - improved: when using pretty_print nice or indented keep_spaces_in is now checked so the elements within an element listed in keep_spaces_in are not indented - added: XML::Twig::Elt insert_new_elt method that does a new and a paste - added: XML::Twig::Elt split_at method splits a #PCDATA element in 2 - added: XML::Twig::Elt split method splits all the text descendants of an element, on a regep, wrapping text captured in brackets in the regexp in a specified element, all elements are returned - added: XML::Twig::Elt mark method is similar to the split method, except that only newly created elements (matched by the regexp) are returned - added: XML::Twig::Elt get_type method returns #ELT for elements and the gi (#PCDATA, #CDATA...) otherwise - added: XML::Twig::Elt is_elt returns the gi if the element is a real element and 0 if it is #PCDATA, #CDATA... - added: XML::Twig::Elt contains_only_text returns 1 if the element contains no "real" element (is_field is another name for it) - added: First implementation of the output_filter option which filters the text before it is output by the print, sprint, flush and text methods (only works for print at the moment, and still under test with various versions of XML::Parser). Standard filters are also available Example: #!/bin/perl -w use strict; use XML::Twig; my $t = new XML::Twig(output_filter => 'latin1'); $t->parse( \*DATA); $t->print; __DATA__ <?xml version="1.0" encoding="ISO-8859-1"?> <doc� att�="valu�">Un homme soup�onn� d'�tre impliqu� dans la mort d'un motard de la police, renvers� </doc�>.
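The 3.00 entry above draws a line between the "print" methods (which return XML-escaped markup) and the "text" methods (which return the stored text as is). A minimal sketch of that distinction, using the documented text and xml_string methods (the sample document here is made up for illustration):

  use XML::Twig;

  my $twig= XML::Twig->new();
  $twig->parse( '<doc><p>fish &amp; chips</p></doc>');

  my $p= $twig->root->first_child( 'p');
  print $p->text, "\n";        # "text" method, stored text as is : fish & chips
  print $p->xml_string, "\n";  # "print"-style method, XML-escaped: fish &amp; chips

So text is what you want when producing plain output, and xml_string (or print/sprint/flush) when producing XML.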
https://sources.debian.org/src/libxml-twig-perl/1:3.52-1/Changes/
CC-MAIN-2021-04
en
refinedweb
Host filtering, restricting the hostnames that your app responds to, is recommended whenever you're running in production for security reasons. In this post, I describe how to add host filtering to an ASP.NET Core application. What is host filtering? When you run an ASP.NET Core application using Kestrel, you can choose the ports it binds to, as I described in a previous post. You have a few options: - Bind to a port on the loopback (localhost) address - Bind to a port on a specific IP address - Bind to a port on any IP address on the machine. Note that we didn't mention a hostname (e.g. example.org) at any point here - and that's because Kestrel doesn't bind to a specific host name, it just listens on a given port. Note that HTTP.sys can be used to bind to a specific hostname. DNS is used to convert the hostname you type in your address bar to an IP address, and typically port 80, or (443 for HTTPS). You can simulate configuring DNS with a provider locally be editing the hosts file on your local machine, as I'll show in the remainder of this section On Linux, you can run sudo nano /etc/hosts to edit the hosts file. On Windows, open an administrative command prompt, and run notepad C:\Windows\System32\drivers\etc\hosts. At the bottom of the file, add entries for site-a.local and site-b.local that point to your local machine: # Other existing configuration # For example: # # 102.54.94.97 rhino.acme.com # source server # 38.25.63.10 x.acme.com # x client host 127.0.0.1 site-a.local 127.0.0.1 site-b.local Now create a simple ASP.NET Core app, for example using dotnet new webapi. By default, if you run dotnet run, your application will listen on ports 5000 (HTTP) and 5001 (HTTPS), on all IP addresses. If you navigate to, you'll see the results from the standard default WeatherForecastController. Thanks to your additions to the hosts file, you can now also access the site at site-a.local and site-b.local, for example,: You'll need to click through some SSL warnings to view the example above, as the development SSL is only valid for localhost, not our custom domain. By default, there's no host filtering, so you can access the ASP.NET Core app via localhost, or any other hostname that maps to your IP address, such as our custom site-a.local domain. But why does that matter? Why should you use host filtering? The Microsoft documentation points out that you should use host filtering for a couple of reasons: - When doing URL generation. - When hosting behind a reverse proxy and the Hostheader is forwarded correctly There are several attacks which rely on an apps responding to requests, regardless of the host name: - A DNS rebinding attack, that allows attackers to execute code on your machine if you're running a server on localhost. - Cache poisoning attacks - Password reset hijacking The latter two attacks essentially rely on an application "echoing" the hostname used to access the website when generating URLs. You can easily see this vulnerability in an ASP.NET Core app if you generate an absolute URL, for use in a password reset email for example. As a simple example, consider the controller below: this generates an absolute link to the WeatherForecast action (shown in the previous image): Specifying the protocol means an absolute URL is generated instead of a relative link. There are various other methods that generate absolute links, as well as others on the LinkGenerator. 
[ApiController] [Route("[controller]")] public class ValuesController : Controller { [HttpGet] public string GetPasswordReset() { return Url.Action("Get", "WeatherForecast", values: null, protocol: "https"); } } Depending on the hostname you access the site with, a different link is generated. By leveraging "forgot your password" functionality, an attacker could send an email from your system to any of your users with a link to a malicious domain under the attacker's control! Hopefully we can all agree that's bad… luckily the fix isn't hard. Enabling host filtering for Kestrel Host filtering is added by default in the ConfigureWebHostDefaults() method, but it's disabled by default. If you're using this method, you can enable the middleware by setting the "AllowedHosts" value in the app's IConfiguration. In the default templates, this value is set to * in appsettings.json, which disables the middleware. To add host filtering, add a semicolon delimited list of hostnames: { "AllowedHosts": "site-a.local;localhost" } You can set the configuration value using any enabled configuration provider, for example using an environment variable. With this value set, you can still access the allowed host names, but all other requests to other hosts will return a 400 response, stating the hostname is invalid: If you're not using the ConfigureWebHostDefaults() method, you need to configure the HostFilteringOptions yourself, and add the HostFilteringMiddleware manually to your middleware pipeline. You can configure these in Startup.ConfigureServices(). For example, the following uses the "AllowedHosts" configuration setting, in a similar way to the defaults: public class Startup { public Startup(IConfiguration configuration) { Configuration = configuration; } public void ConfigureServices(IServiceCollection services) { // ..other config // "AllowedHosts": "localhost;127.0.0.1;[::1]" var hosts = Configuration["AllowedHosts"]? .Split(new[] { ';' }, StringSplitOptions.RemoveEmptyEntries); if(hosts?.Length > 0) { services.Configure<HostFilteringOptions>( options => options.AllowedHosts = hosts; } } public void Configure(IApplicationBuilder app) { // Should be first in the pipeline app.UseHostFiltering(); // .. other config } } This is very similar to the default configuration used by ConfigureWebHostDefaults(), though it doesn't allow changing the configured hosts at runtime. See the default implementation if that's something you need. The default configuration uses an IStartupFilter, HostFilteringStartupFilter, to add the hosting middleware, but it's internal, so you'll have to make do with the approach above. Host filtering is especially important when you're running Kestrel on the edge, without a reverse proxy, as in most cases a reverse proxy will manage the host filtering for you. Depending on the reverse proxy you use, you may need to set the ForwardedHeadersOptions.AllowedHosts value, to restrict the allowed values of the X-Forwarded-Host header. You can read more about configuring the forwarded headers and a reverse proxy in the documentation. Summary In this post I described Kestrel's default behaviour, of binding to a port not a domain. I then showed how this behaviour can be used as an attack vector by generating malicious links, if you don't filter requests to only a limited number of hosts. Finally, I showed how to enable host filtering by setting the AllowedHosts value in configuration, or by manually adding the HostFilteringMiddleware.
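One loose end from the reverse-proxy note above: the article mentions ForwardedHeadersOptions.AllowedHosts without showing it. A minimal sketch of what that configuration might look like — the host name is just an example and the rest of the pipeline is elided:

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.HttpOverrides;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.Configure<ForwardedHeadersOptions>(options =>
        {
            // Process the X-Forwarded-For/Proto/Host headers sent by the proxy
            options.ForwardedHeaders = ForwardedHeaders.XForwardedFor
                                     | ForwardedHeaders.XForwardedProto
                                     | ForwardedHeaders.XForwardedHost;

            // Restrict which X-Forwarded-Host values are accepted (example host)
            options.AllowedHosts.Add("site-a.local");
        });
    }

    public void Configure(IApplicationBuilder app)
    {
        // Should run early, before host filtering and the rest of the pipeline
        app.UseForwardedHeaders();

        // .. other middleware
    }
}

With this in place, a request forwarded by the proxy with an unexpected X-Forwarded-Host value is not propagated to the rest of the pipeline.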
https://andrewlock.net/adding-host-filtering-to-kestrel-in-aspnetcore/
CC-MAIN-2021-04
en
refinedweb
Dividing Without Divide

June 25, 2019

Today’s task comes from a student programmer:

Given a numerator and divisor of unsigned integers, compute the quotient and remainder. You cannot use divide, cannot use mod, and you want to optimize for speed.

After a while, the student programmer looked up the official solution:

def divide_problem(num, div):
    quotient = 1
    while num - (quotient * div) >= div:
        print(num - (quotient * div), "loop")
        quotient += 1
    remainder = num - (quotient * div)
    return quotient, remainder

Your task is to comment on the official solution and write a better one. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.
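One reasonable improvement — my own sketch, not the site's suggested solution — is binary long division: repeatedly subtract shifted copies of the divisor, so the loop runs about log2(num/div) times instead of once per unit of the quotient, and the num < div case comes out right (the official solution above returns (1, -3) for divide_problem(4, 7)):

def divide(num, div):
    """Quotient and remainder of unsigned integers without / or %."""
    if div == 0:
        raise ZeroDivisionError("division by zero")
    quotient, remainder = 0, num
    # largest shift such that (div << shift) still fits into num
    shift = 0
    while (div << (shift + 1)) <= num:
        shift += 1
    # peel off shifted copies of the divisor, building the quotient bit by bit
    while shift >= 0:
        if (div << shift) <= remainder:
            remainder -= div << shift
            quotient |= 1 << shift
        shift -= 1
    return quotient, remainder

assert divide(17, 5) == (3, 2)
assert divide(4, 7) == (0, 4)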
https://programmingpraxis.com/2019/06/25/dividing-without-divide/
CC-MAIN-2021-04
en
refinedweb
Creating a function for consuming data and generating Spring Cloud Stream Sink applications This is part 4 of the blog series in which we are introducing java functions for Spring Cloud Stream applications. Other parts in the series. In the last blog in this series, we saw how we can use a java.util.function.Supplier to generate a Spring Cloud Stream source. In this new edition, we will see how a consuming function can be developed and tested using java.util.function.Consumer and java.util.function.Function. Later on, we will briefly explain the generation of a Spring Cloud Stream sink application from this consumer. Writing a Consumer The idea behind writing a consumer is relatively simple. We consume data from some external source and hand it over to the business logic in the consumer. As in the case of a Supplier, as we saw in the previous blog, the action occurs inside the business logic implementation. If we use libraries to help us do all the heavy lifting such as Spring Integration, then it becomes a matter of simply delegating the data received to the library through an appropriate API. However, if there are no such libraries available, we need to write all that code by ourselves. Let’s take a concrete example to demonstrate this. Writing a Consumer for Apache Pulsar Apache Pulsar is a popular messaging middleware system. Let’s assume for a moment that we want to write a generic Java Consumer that receives data from somewhere and then forwards it to Pulsar. Without getting too much into the details , here is a trivial Consumer that accomplishes this. The basic implementation code is taken from here. @Bean public org.apache.pulsar.client.api.Producer producer() { String pulsarBrokerRootUrl = "pulsar://localhost:6650"; PulsarClient client = PulsarClient.create(pulsarBrokerRootUrl); String topic = "persistent://sample/standalone/ns1/my-topic"; return client.createProducer(topic); } @Bean public Consumer<byte[]> pulsarConsumer(Producer producer) { return payload -> { producer.send(payload); }; } Once again, this is shown for illustrative purposes and may not be a complete implementation of sending data to Apache Pulsar. Nevertheless, this demonstrates the concepts that we want to convey. Looking at the consumer, we can see that the code is trivial; all we are doing inside the lambda expression is calling the send method on the Apache Pulsar Producer. We can inject the above consumer into an application and invoke it’s accept method programmatically, providing the data. As we have seen in the previous blog, the diagram below expresses an idea of running the function standalone or as part of a data orchestration pipeline on platforms like Spring Cloud Data Flow. Ok, that consumer was pretty straightforward, we might think to ourselves. What about if we would like to do something where things are a tad more involved? Below, we will do exactly that. Writing a consuming function for RSocket RSocket is a bi-directional binary protocol for which Spring Framework provides excellent support. RSocket provides a fire and forget model, allowing us to send messages to a RSocket server without receiving a response. We want to write a consumer for this model using TCP where the consumer receives external data and then pushes it to the RSocket server. The Java implementation for RSocket is based on Project Reactor.Therefore when we write a consumer we need to use reactive types and patterns (similar to the reactive feed supplier in the previous blog). 
When using the fire and forget strategy, RSocket returns a Mono<Void>, which our consumer needs to return from the function. However, in the case of java.util.function.Consumer, we cannot return anything. Therefore we have to write a function with the signature Function<String, Mono<Void>> rsocketConsumer(). Since the function returns a Mono<Void>, this is semantically equivalent to writing a consumer. The user of the function needs to get a reference to the Mono and subscribe to it. Similar patterns are used in the out of the box consumers, we already provide for MongoDB and Cassandra. When setting up the project, include the following maven dependency. <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-rsocket</artifactId> </dependency> This starter dependency from Spring Boot will transitively bring all the RSocket dependencies to our project. Before we write the function code, let’s write a ConfigurationProperties class to define some core properties that the function needs. @ConfigurationProperties("rsocket.consumer") public class RsocketConsumerProperties { private String host = "localhost"; private int port = 7000; private String route; … } As we can see, using the prefix rsocket.consumer , we define three properties - host and port are for the RSocket server and route is an endpoint on the server. Now that we have the configuration properties , let’s create a Configuration class to configure our function bean. @Configuration @EnableConfigurationProperties(RsocketConsumerProperties.class) public class RsocketConsumerConfiguration { @Bean public Function<String, Mono<Void>> rsocketConsumer(RSocketRequester.Builder builder, RsocketConsumerProperties rsocketConsumerProperties) { final Mono<RSocketRequester> rSocketRequester = builder.connectTcp(rsocketConsumerProperties.getHost(), rsocketConsumerProperties.getPort()); return input -> rSocketRequester .flatMap(requester -> requester.route(rsocketConsumerProperties.getRoute()) .data(input) .send()); } } We are injecting a builder that comes from Spring Boot auto-configuration into the function that helps us with creating an RSocketRequester. Using this builder we create a Mono<RSocketRequester> with a TCP connection. The connectTcp API method takes the RSocket host and port information. Once we get a handle onto RSocketRequester then we use that inside the lambda provided in the function. We call flatMap on Mono<RSocketRequester> and for each incoming message, we specify the route and the data that needs to be sent before calling the send method that ultimately pushes the data to the RSocket server. That’s all it takes to write a function that consumes data and then sends it to a RSocket server using the fire and forget interaction model. Keep in mind that this code looks very simple because of the various RSocket support and abstractions that Spring Framework underneath provides us. Let’s write a quick test to verify that the function works as expected. As we did with the reactive supplier in the previous blog, add this following dependency to the project. This helps us with testing reactive components. <dependency> <groupId>io.projectreactor</groupId> <artifactId>reactor-test</artifactId> <scope>test</scope> </dependency> Following is the test with other necessary components. 
@SpringBootTest(properties = {"spring.rsocket.server.port=7000", "rsocket.consumer.route=test-route"}) public class RsocketConsumerTests { @Autowired Function<Message<?>, Mono<Void>> rsocketConsumer; @Autowired TestController controller; @Test void testRsocketConsumer() { rsocketConsumer.apply(new GenericMessage<>("Hello RSocket")) .subscribe(); StepVerifier.create(this.controller.fireForgetPayloads) .expectNext("Hello RSocket") .thenCancel() .verify(); } @SpringBootApplication @ComponentScan static class RSocketConsumerTestApplication{} @Controller static class TestController { final ReplayProcessor<String> fireForgetPayloads = ReplayProcessor.create(); @MessageMapping("test-route") void someMethod(String payload) { this.fireForgetPayloads.onNext(payload); } } } A quick explanation of the testing components. We provide the property spring.rsocket.server.porton SpringBootApplication. This allows Spring Boot to auto-configure a default RSocket server for testing. Hard coding the port to 7000here since that is the default port used by Spring Boot when auto configuring the components. This is the same default we used in properties above. We also specify the routethat we want to use in our test. There is a Controllerprovided with a method annotated with MessageMappingwhere it intercepts messages arriving at the route that we specified in the test. Each incoming record on the Server at the route is passed into a Fluxwhere it can be replayed later in the test during assertion. In the test, we are calling the applymethod on the injected RSocketconsumer that we wrote earlier and providing it with a test message. Finally, we use a StepVerifierto verify that the message was sent successfully to the RSocketserver. Generating Spring Cloud Stream Sink application from the RSocket Consumer In the last blog, we covered how to generate a Spring Cloud Stream source application from a Supplier function in much detail. You can follow the same patterns that we used there for generating a sink application from the RSocket function that we wrote above. We are not rehashing all the details involved here. Use the many different sink applications provided here as a template. When we test the function with the test binder in Spring Cloud Stream, send the message to the InputDestination. Spring Cloud Stream will send it downstream to the RSocket server. Then we can use the same verification strategies we used in the unit test above. See this for more information on testing Spring Cloud Stream components using the test binder. Conclusion In this blog post, we saw how we can write a plain consumer that consumes data and acts upon it, using Apache Pulsar as an example. We then explored how to develop a reactive consumer in the form of Function<String, Mono<Void>> with RSocket fire and forget strategy to guide us. We also demonstrated how this reactive consumer can be unit tested. Please follow the procedures laid out in this article for writing your own data consumers and if you do so, consider contributing a pull request.
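As a rough illustration of that last sink-generation step (the property names follow the standard Spring Cloud Stream 3.x functional binding conventions; the destination name is just an example and is not taken from the official stream-applications repository), the rsocketConsumer function could be exposed as a sink with configuration along these lines:

# application.properties of the generated sink
spring.cloud.function.definition=rsocketConsumer

# bind the function's input to a destination on the configured binder
spring.cloud.stream.bindings.rsocketConsumer-in-0.destination=rsocket-sink-topic

# settings picked up by RsocketConsumerProperties
rsocket.consumer.host=localhost
rsocket.consumer.port=7000
rsocket.consumer.route=test-route

Messages arriving on that destination are then handed to the function, which forwards them to the RSocket server using the fire and forget model described above.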
https://spring.io/blog/2020/08/03/creating-a-function-for-consuming-data-and-generating-spring-cloud-stream-sink-applications
CC-MAIN-2021-04
en
refinedweb
Java has a final keyword that serves three purposes. When you use final with a variable, it creates a constant whose value can't be changed after it has been initialized. The other two uses of the final keyword are to create final methods and final classes. A final method is a method that can't be overridden by a subclass. To create a final method, you simply add the keyword final to the method declaration. For example:

public class Car {
    public final int getVelocity() {
        return this.velocity;
    }
}

Here the method getVelocity is declared as final. Thus, any class that uses the Car class as a base class can't override the getVelocity() method. If it tries, the compiler issues the error message: "Overridden method is final". Here are some additional details about final methods: Final methods execute more efficiently than nonfinal methods because the compiler knows at compile time that a call to a final method won't be overridden by some other method. Private methods are automatically considered to be final because you can't override a method you can't see. If a class is declared to be final, all of its methods are considered to be final as well. Because you can't use a final class as the base class for another class, no class can possibly override any of the methods in the final class. Thus all the methods of a final class are final methods.
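To make the two compile-time effects concrete, here is a small illustrative sketch (the SportsCar and Registry names are invented for the example):

// Trying to override the final method from the Car example above:
class SportsCar extends Car {
    // public int getVelocity() { return 0; }   // compile error: overridden method is final
}

// A final class cannot be used as a base class at all:
final class Registry {
    public void register(String name) {
        System.out.println("registered " + name);
    }
}

// class CustomRegistry extends Registry { }    // compile error: cannot inherit from final Registry

Uncommenting either of the commented-out declarations makes the compilation fail, which is exactly the guarantee final provides.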
https://www.demo2s.com/java/java-final-keyword.html
CC-MAIN-2021-04
en
refinedweb
Draft Clone Description The Draft Clone tool produces linked copies of a selected shape. This means that if the original object changes its shape and properties, all clones change as well. Nevertheless, each clone retains its unique position, rotation, and scale, as well as its view properties like shape color, line width, and transparency. The Clone tool can be used on 2D shapes created with the Draft Workbench, but can also be used on many types of 3D objects such as those created with the Part, PartDesign, or Arch Workbenches. To create simple copies, that are completely independent from an original object, use Draft Move, Draft Rotate, and Draft Scale. To position copies in an orthogonal array use Draft Array; to position copies along a path use Draft PathArray; to position copies at specified points use Draft PointArray. Clone next to the original object Usage - Select an object that you wish to clone. - Press the Draft Clone button. Depending on its options, the Draft Scale tool also creates a clone at a specified scale. Clones of 2D objects created with the Draft or Sketcher Workbenches will also be 2D objects, and therefore can be used as such for the PartDesign Workbench. All Arch Workbench objects have the possibility to behave as clones by using their DataCloneOf property. If you use the Draft Clone tool on a selected Arch object, you will produce such an Arch clone instead of a regular Draft clone. Limitations Currently, Sketcher Sketches cannot be mapped to the faces of a clone. Options There are no options for this tool. Either it works with the selected objects or not. Properties - DataObjects: specifies a list of base objects which are being cloned. - DataScale: specifies the scaling factor for the clone, in each X, Y, and Z direction. - DataFuse: if it is True and DataObjects includes many shapes that intersect each other, the resulting clone will be fuse them together into a single shape, or make a compound of them. introduced in version 0.17 Scripting See also: Draft API and FreeCAD Scripting Basics. The Clone tool can be used in macros and from the Python console by using the following function: cloned_object = clone(obj, delta=None, forcedraft=False) - Creates a cloned_objectfrom obj, which can be a single object or a list of objects. - If given, deltais a FreeCAD.Vectorthat moves the new clone away from the original position of the base object. - If forcedraftis True, the resulting object will be a Draft clone, and not an Arch clone, even if objis an Arch Workbench object. The fusion of the objects that are part of the clone can be achieved by setting its Fuse attribute to True. Example: import FreeCAD, Draft place = FreeCAD.Placement(FreeCAD.Vector(1000, 0, 0), FreeCAD.Rotation()) Polygon1 = Draft.makePolygon(3, 750) Polygon2 = Draft.makePolygon(5, 750, placement=place) obj = [Polygon1, Polygon2] vector = FreeCAD.Vector(2600, 500, 0) cloned_object = Draft.clone(obj, delta=vector) cloned_object.Fuse = True FreeCAD.ActiveDocument.recompute() -
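As a small follow-on to the scripting example above (property names match the Properties section; the offset values are arbitrary), the clone keeps its own scale and placement while still following the base shapes:

# give the clone its own per-axis scale and position; it still tracks changes to Polygon1/Polygon2
cloned_object.Scale = FreeCAD.Vector(2, 2, 1)
cloned_object.Placement.Base = FreeCAD.Vector(0, -2000, 0)
FreeCAD.ActiveDocument.recompute()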
https://wiki.freecadweb.org/Draft_Clone/pt-br
CC-MAIN-2020-24
en
refinedweb
How to Speed up Pandas by 4x with one line of code While Pandas is the library for data processing in Python, it isn't really built for speed. Learn more about the new library, Modin, developed to distribute Pandas' computation to speedup your data prep. Pandas is the go-to library for processing data in Python. It’s easy to use and quite flexible when it comes to handling different types and sizes of data. It has tons of different functions that make manipulating data a breeze. The popularity of various Python packages over time. Source But there is one drawback: Pandas is slow for larger datasets. By default, Pandas executes its functions as a single process using a single CPU core. That works just fine for smaller datasets since you might not notice much of a difference in speed. But with larger datasets and so many more calculations to make, speed starts to take a major hit when using only a single core. It’s doing just one calculation at a time for a dataset that can have millions or even billions of rows. Yet most modern machines made for Data Science have at least 2 CPU cores. That means, for the example of 2 CPU cores, that 50% or more of your computer’s processing power won’t be doing anything by default when using Pandas. The situation gets even worse when you get to 4 cores (modern Intel i5) or 6 cores (modern Intel i7). Pandas simply wasn’t designed to use that computing power effectively. Modin is a new library designed to accelerate Pandas by automatically distributing the computation across all of the system’s available CPU cores. With that, Modin claims to be able to get nearly linear speedup to the number of CPU cores on your system for Pandas DataFrames of any size. Let’s see how it all works and go through a few code examples. How Modin Does Parallel Processing With Pandas Given a DataFrame in Pandas, our goal is to perform some kind of calculation or process on it in the fastest way possible. That could be taking the mean of each column with .mean(), grouping data with groupby, dropping all duplicates with drop_duplicates(), or any of the other built-in Pandas functions. In the previous section, we mentioned how Pandas only uses one CPU core for processing. Naturally, this is a big bottleneck, especially for larger DataFrames, where the lack of resources really shows through. In theory, parallelizing a calculation is as easy as applying that calculation on different data points across every available CPU core. For a Pandas DataFrame, a basic idea would be to divide up the DataFrame into a few pieces, as many pieces as you have CPU cores, and let each CPU core run the calculation on its piece. In the end, we can aggregate the results, which is a computationally cheap operation. How a multi-core system can process data faster. For a single-core process (left), all 10 tasks go to a single node. For the dual-core process (right), each node takes on 5 tasks, thereby doubling the processing speed. That’s exactly what Modin does. It slices your DataFrame into different parts such that each part can be sent to a different CPU core. Modin partitions the DataFrames across both the rows and the columns. This makes Modin’s parallel processing scalable to DataFrames of any shape. Imagine if you are given a DataFrame with many columns but fewer rows. Some libraries only perform the partitioning across rows, which would be inefficient in this case since we have more columns than rows. 
But with Modin, since the partitioning is done across both dimensions, the parallel processing remains efficient all shapes of DataFrames, whether they are wider (lots of columns), longer (lots of rows), or both. A Pandas DataFrame (left) is stored as one block and is only sent to one CPU core. A Modin DataFrame (right) is partitioned across rows and columns, and each partition can be sent to a different CPU core up to the max cores in the system. The figure above is a simple example. Modin actually uses a Partition Manager that can change the size and shape of the partitions based on the type of operation. For example, there might be an operation that requires entire rows or entire columns. In that case, the Partition Manager will perform the partitions and distribution to CPU cores in the most optimal way it can find. It’s flexible. To do a lot of the heavy lifting when it comes to executing the parallel processing, Modin can use either Dask or Ray. Both of them are parallel computing libraries with Python APIs, and you can select one or the other to use with Modin at runtime. Ray will be the safest one to use for now as it is more stable — the Dask backend is experimental. But hey, that’s enough theory. Let’s get to the code and speed benchmarks! Benchmarking Modin Speed The easiest way to install and get Modin working is via pip. The following command installs Modin, Ray, and all of the relevant dependencies: pip install modin[ray] For our following examples and benchmarks, we’re going to be using the CS:GO Competitive Matchmaking Data from Kaggle. Each row of the CSV contains data about a round in a competitive match of CS:GO. We’ll stick to experimenting with just the biggest CSV file for now (there are several) called esea_master_dmg_demos.part1.csv, which is 1.2GB. With such a size, we should be able to see how Pandas slows down and how Modin can help us out. For the tests, I’ll be using an i7–8700k CPU, which has 6 physical cores and 12 threads. The first test we’ll do is simply reading in the data with our good’ol read_csv().. Not too shabby for just changing the import statement! Let’s do a couple of heavier processes on our DataFrame. Concatenating multiple DataFrames is a common operation in Pandas — we might have several or more CSV files containing our data, which we then have to read one at a time and concatenate. We can easily do this with the pd.concat() function in Pandas and Modin. We’d expect that Modin should do well with this kind of an operation since it’s handling a lot of data. The code is shown below. In the above code, we concatenated our DataFrame to itself 5 times. Pandas was able to complete the concatenation operation in 3.56 seconds while Modin finished in 0.041 seconds, an 86.83X speedup! It appears that even though we only have 6 CPU cores, the partitioning of the DataFrame helps a lot with the speed. A Pandas function commonly used for DataFrame cleaning is the .fillna() function. This function finds all NaN values within a DataFrame and replaces them with the value of your choice. There’s a lot of operations going on there. Pandas has to go through every single row and column to find NaN values and replace them. This is a perfect opportunity to apply Modin since we’re repeating a very simple operation many times. This time, Pandas ran the .fillna() in 1.8 seconds while Modin took 0.21 seconds, an 8.57X speedup! A caveat and final benchmarks So is Modin always this fast? Well, not always. 
There are some cases where Pandas is actually faster than Modin, even on this big dataset with 5,992,097 (almost 6 million) rows. The table below shows the run times of Pandas vs. Modin for some experiments I ran.

As you can see, there were some operations in which Modin was significantly faster, usually reading in data and finding values. Other operations, such as performing statistical calculations, were much faster in Pandas.

Practical Tips for using Modin

Modin is still a fairly young library and is constantly being developed and expanded. As such, not all of the Pandas functions have been fully accelerated yet. If you try and use a function with Modin that is not yet accelerated, it will default to Pandas, so there won’t be any code bugs or errors. For the full list of Pandas methods that are supported by Modin, see this page.

By default, Modin will use all of the CPU cores available on your machine. There may be some cases where you wish to limit the number of CPU cores that Modin can use, especially if you want to use that computing power elsewhere. We can limit the number of CPU cores Modin has access to through an initialization setting in Ray since Modin uses it on the backend.

import ray
ray.init(num_cpus=4)

import modin.pandas as pd

When working with big data, it’s not uncommon for the size of the dataset to exceed the amount of memory (RAM) on your system. Modin has a specific flag that we can set to true, which will enable its out of core mode. Out of core basically means that Modin will use your disk as overflow storage for your memory, allowing you to work with datasets far bigger than your RAM size. We can set the following environment variable to enable this functionality:

export MODIN_OUT_OF_CORE=true

Conclusion

So there you have it! Your guide to accelerating Pandas functions using Modin. Very easy to do by changing just the import statement. Hopefully, you find Modin useful in at least a few situations to accelerate your Pandas functions.
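The timing snippets referenced above were shown as images in the original post; a minimal sketch of that kind of side-by-side comparison looks like this (the CSV name matches the Kaggle file mentioned earlier, and absolute timings will vary by machine):

import time

import pandas
import modin.pandas as pd   # the only change needed elsewhere in your code

csv = "esea_master_dmg_demos.part1.csv"

def timed(label, fn):
    # run fn once and print how long it took
    start = time.time()
    result = fn()
    print(label, round(time.time() - start, 3), "s")
    return result

df_pandas = timed("pandas read_csv:", lambda: pandas.read_csv(csv))
df_modin  = timed("modin  read_csv:", lambda: pd.read_csv(csv))

timed("pandas concat x5:", lambda: pandas.concat([df_pandas] * 5))
timed("modin  concat x5:", lambda: pd.concat([df_modin] * 5))

timed("pandas fillna:", lambda: df_pandas.fillna(0))
timed("modin  fillna:", lambda: df_modin.fillna(0))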
https://www.kdnuggets.com/2019/11/speed-up-pandas-4x.html
CC-MAIN-2020-24
en
refinedweb
Gluu Server Backup# The Gluu Server should be backed up frequently--we recommend at least one daily and one weekly backup of Gluu's data and/or VM. There are multiple methods for backing up the Gluu Server. A few recommended strategies are provided below. VM Snapshot Backup# In the event of a production outage, a proper snapshot of the last working condition will help rapidly restore service. Most platform virtualization software and cloud vendors have snapshot backup features. For instance, Digital Ocean has Live Snapshot and Droplet Snapshot; VMWare has Snapshot Manager, etc. Snaphots should be taken for all Gluu environments (e.g. Prod, Dev, QA, etc.) and tested periodically to confirm consistency and integrity. Tarball Method# All Gluu Server files live in a single folder: /opt. The entire Gluu Server CE chroot folder can be archived using the tar command: Stop the server: service gluu-server stopor /sbin/gluu-serverd stop Use tarto take a backup: tar cvf gluu40-backup.tar /opt/gluu-server/ Start the server again: service gluu-server startor /sbin/gluu-serverd start LDIF Data Backup# From time to time (daily or weekly), the LDAP database should be exported in a standard LDIF format. Having the data in plain text offers some options for recovery that are not possible with a binary backup. Instructions are provided below for exporting OpenDJ data. The below instructions address situations where unused and expired cache and session related entries are piling and causing issues with functionality. Read more about this issue. OpenDJ# Errors that this may help fix include but are not restricted to: - Out of Memory If your Gluu Server is backed by OpenDJ, follow these steps to backup your data: First check your cache entries by running the following command: /opt/opendj/bin/ldapsearch -h localhost -p 1636 -Z -X -D "cn=directory manager" -w <password> -b 'o=gluu' -T 'oxAuthGrantId=*' dn | grep 'dn:' | wc -l Dump the data as LDIF - Log in to root: sudo su - - Log in to Gluu-Server-4.1: service gluu-server login or /sbin/gluu-serverd login Stop the identity, oxauthand opendjservices If you are moving to a new LDAP, copy over your schema files from the following directory. Otherwise simply copy it for backup: /opt/opendj/config/schema/ - Now export the LDIF and save it somewhere safe. You will not be importing this if you choose to apply any filters as below: /opt/opendj/bin/export-ldif -n userRoot -l exactdatabackup_date.ldif - Now exclude oxAuthGrantIdso the command becomes: /opt/opendj/bin/export-ldif -n userRoot -l yourdata_withoutoxAuthGrantId.ldif --includeFilter '(!(oxAuthGrantId=*))' - You may also wish to exclude oxMetricso the command becomes: /opt/opendj/bin/export-ldif -n userRoot -l yourdata_withoutGrantIdMetic.ldif --includeFilter '(&(!(oxAuthGrantId=*))(! (objectClass=oxMetric)))' Now, only if needed, rebuild indexes: - Check status of indexes: /opt/opendj/bin/backendstat show-index-status --backendID userRoot --baseDN o=gluu Take note of all indexes that need to be rebuilt. If no indexing is needed, move on to step 4. - Build backend index for all indexes that need it accoring to previous status command, change passoword -wand index name accordingly. 
This command has to be run for every index separately:
/opt/opendj/bin/dsconfig create-backend-index --port 4444 --hostname localhost --bindDN "cn=directory manager" -w password --backend-name userRoot --index-name iname --set index-type:equality --set index-entry-limit:4000 --trustAll --no-prompt

- Rebuild the indexes as needed; here are examples:
/opt/opendj/bin/rebuild-index --baseDN o=gluu --index iname
/opt/opendj/bin/rebuild-index --baseDN o=gluu --index uid
/opt/opendj/bin/rebuild-index --baseDN o=gluu --index mail

- Check status again:
/opt/opendj/bin/backendstat show-index-status --backendID userRoot --baseDN o=gluu

- Verify indexes:
/opt/opendj/bin/verify-index --baseDN o=gluu --countErrors

Next import your previously exported LDIF. Here, we are importing without oxAuthGrantId.

Note: You may import the exact export of your LDAP, exactdatabackup_date.ldif. Do not import your exact copy of your LDIF if you are following the instructions to clean your cache entries.

/opt/opendj/bin/import-ldif -n userRoot -l yourdata_withoutoxAuthGrantId.ldif

If you moved to a new LDAP, copy back your schema files to this directory:
/opt/opendj/config/schema/

Start the identity, oxauth and opendj services.

Finally, verify the cache entries have been removed:
/opt/opendj/bin/ldapsearch -h localhost -p 1636 -Z -X -D "cn=directory manager" -w <password> -b 'o=gluu' -T 'oxAuthGrantId=*' dn | grep 'dn:' | wc -l

You should be done and everything should be working perfectly. You may notice your Gluu Server responding slower than before. That is expected -- your LDAP is adjusting to the new data, and indexing might be in process. Give it some time and it should be back to normal.

Backing up data and restoring from backup: Kubernetes instructions#

Overview# This guide introduces how to back up data and restore from a backup file.

Couchbase# Install backup strategy# A typical installation of Gluu using pygluu-kubernetes.pyz will automatically install a backup strategy that backs up Couchbase every 5 minutes to a persistent volume. However, the Couchbase backup can also be set up manually:

Download pygluu-kubernetes.pyz. This package can be built manually. Run:
./pygluu-kubernetes.pyz install-couchbase-backup

Note: ./pygluu-kubernetes.pyz install-couchbase-backup will not install Couchbase.

Uninstall backup strategy# A file named couchbase-backup.yaml will have been generated during installation of the backup strategy. Use it as follows to remove the backup strategy:
kubectl delete -f ./couchbase-backup.yaml

Restore from backup# Please save a copy of the configurations to a file.

Note: An existing Gluu setup must exist for this to work. Please do not attempt to delete any resources, and be very careful in handling Gluu configurations and secrets.

Couchbase restore step# Install a new Couchbase if needed:
./pygluu-kubernetes.pyz install-couchbase

Create a pod definition file called restore-cb-pod.yaml and paste the below YAML, changing the volumes, volumeMounts and namespace if they are different.
Note: ./pygluu-kubernetes.pyz install-couchbase-backup uses the volumes and volumeMounts seen in the YAML below.

apiVersion: v1
kind: Pod
metadata:
  name: restore-node
  namespace: cbns
spec:
  # specification of the pod's contents
  containers:
    - name: restore-pod
      image: couchbase/server:enterprise-6.5.0
      # Just spin & wait forever
      command: [ "/bin/bash", "-c", "--" ]
      args: [ "while true; do sleep 30; done;" ]
      volumeMounts:
        - name: "couchbase-cluster-backup-volume"
          mountPath: "/backups"
  volumes:
    - name: couchbase-cluster-backup-volume
      persistentVolumeClaim:
        claimName: backup-pvc
  restartPolicy: Never

Apply restore-cb-pod.yaml:
kubectl apply -f restore-cb-pod.yaml

Access the restore-node pod:
kubectl exec -it restore-node -n cbns -- /bin/bash

Choose the backup of choice:
cbbackupmgr list --archive /backups --repo couchbase

We will choose the oldest we received from the command above: 2020-02-20T10_05_13.781131773Z

Perform the restore using the cbbackupmgr command:
cbbackupmgr restore --archive /backups --repo couchbase --cluster cbgluu.cbns.svc.cluster.local --username admin --password password --start 2020-02-20T10_05_13.781131773Z --end 2020-02-20T10_05_13.781131773Z

Learn more about the cbbackupmgr command and its options.

Once done, delete the restore-node pod:
kubectl delete -f restore-cb-pod.yaml -n cbns

Gluu restore step# Download pygluu-kubernetes.pyz. This package can be built manually. Run:
./pygluu-kubernetes.pyz restore

OpenDJ / Wren:DS# Install backup strategy# A typical installation of Gluu using pygluu-kubernetes.pyz will automatically install a backup strategy that backs up OpenDJ / Wren:DS every 10 minutes to /opt/opendj/ldif. However, the LDAP backup can also be set up manually:

Download pygluu-kubernetes.pyz. This package can be built manually. Run:
./pygluu-kubernetes.pyz install-ldap-backup

Note: Up to 6 backups will be stored at /opt/opendj/ldif on the running opendj pod. The backups will carry the names backup-0.ldif to backup-5.ldif and will be overwritten to save space.

Uninstall backup strategy# A file named ldap-backup.yaml will have been generated during installation of the backup strategy. Use it as follows to remove the backup strategy:
kubectl delete -f ./ldap-backup.yaml

Restore from backup# Note: An existing Gluu setup must exist for this to work. Please do not attempt to delete any resources, and be very careful in handling Gluu configurations and secrets.

OpenDJ / Wren:DS restore step# The attached OpenDJ volume should carry the backups at /opt/opendj/ldif. If this is a fresh installation, attach the older volume to the new pod.

Access the opendj pod:
kubectl exec -ti opendj-0 -n gluu /bin/sh

Choose the backup of choice and rename it to backup-this-copy.ldif. The pygluu-kubernetes.pyz will perform the import.
ls /opt/opendj/ldif
cd /opt/opendj/ldif
cp backup-1.ldif backup-this-copy.ldif

Run:
./pygluu-kubernetes.pyz restore
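Before running a restore, it can help to confirm that backups actually exist on the volumes described above. The snippet below is a sketch rather than part of the official instructions; the pod names and namespaces (opendj-0, gluu, restore-node, cbns) are taken from the examples above and may differ in your cluster.

#!/bin/bash
# Sketch: sanity-check that backup files are present before attempting a restore.

# LDAP side: list the rolling backup-0.ldif ... backup-5.ldif files on the opendj pod.
kubectl exec -ti opendj-0 -n gluu -- ls -l /opt/opendj/ldif

# Couchbase side: requires the restore-node pod defined in the YAML above to be running.
kubectl exec -it restore-node -n cbns -- cbbackupmgr list --archive /backups --repo couchbase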
https://gluu.org/docs/gluu-server/operation/backup/
A reachability analysis for control-flow nodes involving stack variables. This defines sources, sinks, and any other configurable aspect of the analysis. Multiple analyses can coexist.

To create an analysis, extend this class with a subclass whose characteristic predicate is a unique singleton string. For example, write

class MyAnalysisConfiguration extends LocalScopeVariableReachability {
  MyAnalysisConfiguration() { this = "MyAnalysisConfiguration" }
  // Override `isSource` and `isSink`.
  // Override `isBarrier`.
}

Then, to query whether there is flow between some source and sink, call the reaches predicate on an instance of MyAnalysisConfiguration.

Import path

import semmle.code.cpp.controlflow.LocalScopeVariableReachability
https://help.semmle.com/qldoc/cpp/semmle/code/cpp/controlflow/LocalScopeVariableReachability.qll/type.LocalScopeVariableReachability$LocalScopeVariableReachability.html
Make your messaging app available for share sheet suggestions and use SiriKit intents to populate your app's share extension.

Framework - UIKit

Overview

Using the iOS share sheet, users can launch your messaging app instantly from a list of suggestions when sharing content like a link, image, video, or file. The share sheet suggests conversations with people in apps that the user interacts with frequently, and updates its suggestions over time based on the user's favorite apps and conversations. Figure 1 shows suggestions that include a mix of apps and conversations with contacts.

Figure 1: A share sheet provides a list of suggestions

To allow iOS to include conversations from your messaging app in the list of suggestions:
- Add a share extension to your app as described in Understand Share Extensions.
- Declare support for the INSendMessageIntent intent type.
- Donate an INSendMessageIntent in your messaging app and its share extension.

As the user selects an app from the list of suggestions, the app's sharing interface, implemented as a share extension, accesses additional metadata. iOS provides the INSendMessageIntent for you to prepopulate the interface of your app's share extension. For example, you can access the conversationIdentifier property and preselect a conversation with which to share content so the user doesn't need to search for a contact in a list, or type a friend's name. Figure 2 shows an app's sharing interface using the default system UI that's based on an SLComposeServiceViewController, with the recipient (Juan Chavez) already filled in.

Figure 2: Prepopulating the share extension's interface with a conversation

Add a Share Extension to Your App

To add a share extension to your app, open your app's project in Xcode and select File > New > Target from the menu bar. Xcode presents a sheet that contains templates for different kinds of targets. Select the share extension template from the iOS pane and follow the steps in Xcode's interface to add one to your app. To learn more, see Understand Share Extensions in the App Extension Programming Guide.

Support the Send Message Intent

Open the Share extension's Info.plist and expand the NSExtension and NSExtensionAttributes keys. Next, add a new entry with the IntentsSupported key and select Array for its value type. Add the string INSendMessageIntent as a new value to the array to declare support for the INSendMessageIntent intent type, as shown in Figure 3.

Figure 3: Add a new entry under the NSExtensionAttributes key

To make debugging easier, the value of NSExtensionActivationRule is set to TRUEPREDICATE when you first add a share extension using Xcode's template. Replace it with valid activation rules as described in Declaring Supported Data Types for a Share or Action Extension before submitting your app for review.

Donate a Send Message Intent

Donate an INSendMessageIntent when the user sends a message in your app and its share extension, and not in any other circumstances. For example, don't donate an intent when the user hasn't actually sent a message. As you initialize the INSendMessageIntent object, provide metadata that will be available later when the user chooses your app's share extension from the list of suggestions. The following code donates an INSendMessageIntent with a group, a conversation, and an INImage.

// Create an INSendMessageIntent to donate an intent for a conversation with Juan Chavez.
let groupName = INSpeakableString(spokenPhrase: "Juan Chavez") let sendMessageIntent = INSendMessageIntent(recipients: nil, content: nil, speakableGroupName: groupName, conversationIdentifier: "sampleConversationIdentifier", serviceName: nil, sender: nil) // Add the user's avatar to the intent. let image = INImage(named: "Juan Chavez") sendMessageIntent.setImage(image, forParameterNamed: \.speakableGroupName) // Donate the intent. let interaction = INInteraction(intent: sendMessageIntent, response: nil) interaction.donate(completion: { error in if error != nil { // Add error handling here. } else { // Do something, e.g. send the content to a contact. } }) When iOS includes a conversation within your app as a suggestion in the share sheet, it displays your app’s icon along with the INImage you associated with your INSend. If there’s no INImage set on the intent, iOS uses the image property from the INPerson object for each recipient. If the INPerson object’s image is nil, iOS looks up the corresponding contact in the Contacts app using the person’s contact. The share sheet then uses the contact’s image. Populate Your Share Extension’s Interface with Metadata When the user selects your app from the list of suggestions, you can access the metadata that you created when your app donated the INSend. Use it to populate your share extension’s interface. The following code listing shows a template implementation for an SLCompose subclass. It accesses the intent property, makes sure it’s an INSend, and uses the intent’s conversation to create a new Recipient object. It then uses the Recipient object to populate the share extension’s interface in the configuration method. import Intents import Social import UIKit class ShareViewController: SLComposeServiceViewController { var recipient: Recipient = Recipient(withName: "Placeholder") override func viewDidLoad() { super.viewDidLoad() // Populate the recipient property with the metadata in case the user tapped a suggestion from the share sheet. let intent = self.extensionContext?.intent as? INSendMessageIntent if intent != nil { let conversationIdentifier = intent!.conversationIdentifier self.recipient = recipient(identifier: conversationIdentifier!) } } func recipient(identifier: String) -> Recipient { // Create a recipient object, for example by loading it from a data base. return Recipient(withName: identifier) } override func isContentValid() -> Bool { // Do validation of contentText and/or NSExtensionContext attachments here return true } override func didSelectPost() { // This is called after the user selects Post. Do the upload of contentText and/or NSExtensionContext attachments. // Inform the host that we're done, so it un-blocks its UI. // Note: Alternatively you could call super's -didSelectPost, which will similarly complete the extension context. self.extensionContext!.completeRequest(returningItems: [], completionHandler: nil) } override func configurationItems() -> [Any]! { // To add configuration options via table cells at the bottom of the sheet, return an array of SLComposeSheetConfigurationItem here. // Use the Recipient object to populate the share sheet. let item = SLComposeSheetConfigurationItem() item?.title = NSLocalizedString("To:", comment: "The To: label when sharing content.") item?.value = self.recipient.name item?.tapHandler = { self.validateContent() item!.value = self.recipient.name } return [item!] } }
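For reference, the Info.plist entry described in the Support the Send Message Intent section above typically looks like the following in source form. This is a sketch based on the steps in this article rather than a verbatim excerpt from an Xcode template, so compare it against the keys your template actually generates.

<key>NSExtension</key>
<dict>
    <key>NSExtensionAttributes</key>
    <dict>
        <key>IntentsSupported</key>
        <array>
            <string>INSendMessageIntent</string>
        </array>
        <key>NSExtensionActivationRule</key>
        <string>TRUEPREDICATE</string> <!-- replace with real activation rules before review -->
    </dict>
    <!-- other NSExtension keys generated by the Xcode template remain unchanged -->
</dict>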
https://developer.apple.com/documentation/foundation/app_extension_support/supporting_suggestions_in_your_app_s_share_extension?changes=_3&language=objc
ADC FiPy difficulty

I'm having difficulty getting meaningful ADC values with my FiPy and Expansion board 3. I have a 10k pot connected to ground and 3.3V. The wiper I have tried connecting to many different ADC pins such as P12, P13, P14, P17, P16. The values are pretty static regardless of the position of the wiper. Even if I short the input pin directly to 3.3V there seems to be no change. Here's what I'm using:

from machine import ADC
adc = ADC(0)
adc_c = adc.channel(pin='P13', attn=ADC.ATTN_11DB)
print(adc_c.voltage())

Am I doing something silly here? The expansion board firmware (3.1) is up to date. I haven't looked into upgrading the FiPy yet.

Thanks, I didn't realize the pinout on the expansion board was different from the FiPy's.

PRINT TOO SMALL :-P

@peter_ I cannot confirm your observation. I extended your little test code as follows (no pot), with a connection between P22 and P13:

from machine import ADC, DAC
from time import sleep

adc = ADC(0)
adc_c = adc.channel(pin='P13', attn=ADC.ATTN_11DB)
dac = DAC('P22')  # create a DAC object
for _ in range(100):
    dac.write(_ / 100)
    print(adc_c.voltage())
    sleep(0.1)

Note that adc_c.voltage() requires a calibration for better values, and that the input of the ADC should be low impedance. If your pot has a high impedance, connect a small capacitor (~10 nF) between the ADC input and GND.
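If the readings still jump around once the wiring is sorted out, averaging a handful of samples on the software side usually steadies them. A rough sketch, reusing the pin and attenuation from the code above (sample count and delay are just guesses to adapt):

from machine import ADC
from time import sleep_ms

adc = ADC(0)
adc_c = adc.channel(pin='P13', attn=ADC.ATTN_11DB)

def read_avg(channel, samples=16):
    # average several readings to smooth out noise
    total = 0
    for _ in range(samples):
        total += channel.voltage()
        sleep_ms(2)
    return total / samples

print(read_avg(adc_c))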
https://forum.pycom.io/topic/5045/adc-fipy-difficulty
Using The Request Body With Serverless Functions Dustin Myers Updated on ・8 min read Getting Started With Serverless (2 Part Series) Harnessing the request body within serverless functions really expands what we can do with our apps. Greater Power So far we have seen the most basic setup for serverless functions - returning a set of hardcoded data. In this tutorial, we will look at what we can do with serverless functions to create a more complicated application. We will be using the Star Wars API (SWAPI) to build a multi-page application that will display a list of Star Wars characters, let your user click on the character to open the character page. We will use serverless functions for two purposes here: - Avoid any CORS issues - Add character images to the data provided by SWAPI, since the data does not include images We will need to harness the power of the serverless functions request body as built by Zeit to achieve these lofty goals. Let's get started! Hosting On Zeit The starting code for this tutorial is in this repo here and the deployed instance here. You will need to fork it so you can connect it to a Zeit project. Go ahead and fork it now, then clone the repository to your own machine. From there, use the now cli (download instructions) to deploy the app to Zeit. This will create a new project on Zeit and deploy it for you. This app is built with Zeit's Next.js template. This will allow us to open a dev environment on our own machines for testing and debugging our serverless functions, while still giving us the full Zeit workflow and continuous development environment. After you have cloned the repo, install the dependencies with yarn. Then fire up the app with yarn run dev. This gives you a link you can open up in your browser. You can now use the browser for debugging the Next.js app, and the terminal for debugging your serverless functions. Refactoring To Use Serverless Functions Right now, the app works to display the list of characters, but it is just making the fetch request to SWAPI in the component. Take a look at /pages/index.js. If you are unfamiliar with data fetching in a Next.js app, check out their docs on the subject. We are following those patterns in this app. Instead of the component calling SWAPI, we want to make a request from the app to a serverless function and have the serverless function make the request to SWAPI for us. This will allow us to achieve the two things listed above. Let's go ahead and refactor this to use a serverless function. File structure /pages/api directory To start out, add an /api directory inside the /pages directory. Zeit will use this directory to build and host the serverless functions in the cloud. Each file in this directory will be a single serverless function and will be the endpoint that the app can use to make HTTP requests. get-character-list.js Now inside /pages/api add a new file called get-character-list.js. Remember adding API files in the last tutorial? Just like that, we can send HTTP requests to the serverless function that will be housed in this file using the endpoint "/api/get-character-list". The serverless function Now let's build the get-character-list function. The function will start out like this: export default (req, res) => {}; Inside this function is where we want to fetch the data for the star wars characters. Then we will return the array of characters to the client. I have set up a fetchCharacters function outside of the default function. 
I call that from the default function and then use the res object to return the character data. Note that we are using "node-fetch" here to give us our wonderful fetch syntax as this is a node function. const fetch = require("node-fetch"); const fetchCharacters = async () => { const res = await fetch(""); const { results } = await res.json(); return results; }; export default async (req, res) => { try { const characters = await fetchCharacters(); res.status(200).json({ characters }); } catch (error) { res.status(500).json({ error }); } }; Inside the serverless function, let's add a couple of console.logs so you can see the function at work within your terminal. const fetch = require("node-fetch"); const fetchCharacters = async () => { const res = await fetch(""); const { results } = await res.json(); // ADD ONE HERE console.log(results); return results; }; export default async (req, res) => { try { const characters = await fetchCharacters(); // ADD ONE HERE console.log(characters) res.status(200).json({ characters }); } catch (error) { res.status(500).json({ error }); } }; When you have a chance to watch those logs happen, go ahead and remove them, then move on to the next step. Updating the Next.js app Now that we have our serverless function in place, let's update the call that is happening in /pages/index.js. We need to change the path we provided to useSWR to our serverless function endpoint - "/api/get-character-list". Notice though, that our serverless function is changing the object that will be sent to our app. Inside the effect hook that is setting the data to state, we need to update that as well to expect an object with a characters property. We're getting our data through the serverless function! 😁🎉🔥 Adding thumbnail images The final step for our list page is to add thumbnail images to the data before our serverless function returns the characters to the app. I have collected images for you. You're welcome! const images = [ "", "", "", "", "", "", "", "", "", "" ]; Add this array to your serverless function file, then add a .map() to add these images to the data before you send it back. export default async (req, res) => { try { const list = await fetchCharacters().catch(console.error); // Map over chatacters to add the thumbnail image const characters = list.map((character, index) => ({ ...character, thumbnail: images[index] })); res.status(200).send({ characters }); } catch (error) { console.log({ error }); res.status(500).json({ error }); } }; Using The Request Object Now we will build out the character page. You may have noticed that clicking on a character card navigates you to a character page. The character page URL has a dynamic param /:id. In the /pages/Character/[id].js file we are using Next.js' useRouter hook to get the id param from the URL. We want to make a request to another serverless function which will fetch the character data for us. That function will take in the id of the character we clicked on via query parameters. The serverless function file/endpoint The file structure here will be the same as we've seen thus far. So go ahead and set up a file called /pages/api/get-character-by-id.js. Add a serverless function there. Just have it return some dummy data, like { message: 'hello' } for now. Next add the same useSWR and fetcher functions to [id].js. Make a request to the new function to make sure it's working. 
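If you want a checkpoint before wiring up the query param, the stub can be as small as the sketch below. It reuses the fetcher pattern from the list page; the file names match the article, while the placeholder markup and the comment about the id are mine.

// pages/api/get-character-by-id.js - temporary stub
export default (req, res) => {
  res.status(200).json({ message: "hello" });
};

// pages/Character/[id].js - just enough to hit the new endpoint
import { useRouter } from "next/router";
import fetch from "unfetch";
import useSWR from "swr";

async function fetcher(path) {
  const res = await fetch(path);
  return await res.json();
}

const Character = () => {
  const { id } = useRouter().query; // will be used for the query param next
  const { data } = useSWR("/api/get-character-by-id", fetcher);
  return <pre>{JSON.stringify(data)}</pre>; // placeholder output
};

export default Character;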
Once you see the request happen (you can check it in the network tab in your browser) we can build in the query param and make a request to SWAPI for the character's data. The Query Param The request URL from the page will add a query param for the id. Our endpoint will change to this - /api/get-character-by-id?id=${id}. Then we can grab the id in the serverless function like this - const { id } = req.query. Easy peasy! Your turn Using what you've built so far, and what we just learned about the query param, build out the HTTP request in your component to make a request with the query param. In your serverless function, grab that param from the req object and fetch the data you need from SWAPI, adding the id to the end of the URL (e.g. for Luke Skywalker, your request URL to SWAPI should be). When the data returns, add the correct image to the object and return the data to your app. Finally, build out your component as a character page to display the character data. Go ahead, get working on that. I'll wait! When you're done, scroll down to see my implementation. Solution Great job! Aren't serverless functions awesome! Here is how I implemented everything for this page. // get-character-by-id.js const fetch = require("node-fetch"); // probably should move this to a util file now and just import it :) const images = [ "", "", "", "", "", "", "", "", "", "" ]; const fetchCharacter = async id => { const res = await fetch(`{id}`); const data = await res.json(); return data; }; export default async (req, res) => { const { id } = req.query; // Make sure that id is present if (!id) { res .status(400) .json({ error: "No id sent - add a query param for the id" }); } // fetch the character data and add the image to it try { const character = await fetchCharacter(id).catch(console.error); character.thumbnail = images[id - 1]; res.status(200).send({ character }); } catch (error) { console.log({ error }); res.status(500).json({ error }); } }; // [id].js import { useState, useEffect } from "react"; import { useRouter } from "next/router"; import fetch from "unfetch"; import useSWR from "swr"; import styles from "./Character.module.css"; async function fetcher(path) { const res = await fetch(path); const json = await res.json(); return json; } const Character = () => { const [character, setCharacter] = useState(); const router = useRouter(); const { id } = router.query; // fetch data using SWR const { data } = useSWR(`/api/get-character-by-id?id=${id}`, fetcher); useEffect(() => { if (data && !data.error) { setCharacter(data.character); } }, [data]); // render loading message if no data yet if (!character) return <h3>Fetching character data...</h3>; return ( <main className="App"> <article className={styles.characterPage}> <img src={character.thumbnail} alt={character.name} /> <h1>{character.name}</h1> </article> </main> ); }; export default Character; There we have it! I did not add much to the character page here so that the code block would be somewhat short. But hopefully, you have built it out to display all of the character's cool data! Drop a link to your hosted site in the comments when you finish! Final code can be found in here and the final deployment here. Getting Started With Serverless (2 Part Series) 🎩 JavaScript Enhanced Scss mixins! 🎩 concepts explained In the next post we are going to explore CSS @apply to supercharge what we talk about here....
https://practicaldev-herokuapp-com.global.ssl.fastly.net/dustinmyers/using-the-request-body-with-serverless-functions-220a
Building real time networked games and applications can be challenging. This tutorial will show you how to connect flash clients using Cirrus, and introduce you to some vital techniques. Let's take a look at the final result we will be working towards. Click the start button in the SWF above to create a 'sending' version of the application. Open this tutorial again in a second browser window, copy the nearId from the first window into the textbox, and then click Start to create a 'receiving' version of the application. In the 'receiving' version, you'll see two rotating needles: one red, one blue. The blue needle is rotating of its own accord, at a steady rate of 90°/second. The red needle rotates to match the angle sent out by the 'sending' version. (If the red needle seems particularly laggy, try moving the browser windows so that you can see both SWFs at once. Flash Player runs EnterFrame events at a much lower rate when the browser window is in the background, so the 'sending' window transmits the new angle much less frequently.) Step 1: Getting Started First things first: you need a Cirrus 'developer key', which can be obtained at the Adobe Labs site. This is a text string that is uniquely assigned to you on registration. You will use this in all the programs you write to get access to the service, so it might be best to define it as a constant in one of your AS files, like this: public static const CIRRUS_KEY:String = "<my string here>"; Note that its each developer or development team that needs its own key, not each user of whatever applications you create. Step 2: Connecting to the Cirrus Service We begin by creating a network connection using an instance of the (you guessed it) NetConnection class. This is achieved by calling the connect() method with your previously mentioned key, and the URL of a Cirrus 'rendezvous' server. Since at the time of writing Cirrus uses a closed protocol there is only one such server; its address is rtmfp://p2p.rtmfp.net public class Cirrus { public static const CIRRUS_KEY:String = "<my string here>" private static var netConnection:NetConnection; public static function Init(key:String):void { if( netConnection != null ) return; netConnection = new NetConnection(); try { netConnection.connect("rtmfp://p2p.rtmfp.net", key); } catch(e:Error) {} } } Since nothing happens instantly in network communication, the netConnection object will let you know what it's doing by firing events, specifically the NetStatusEvent. The important information is held in the code property of the event's info object. private function OnStatus(e:NetStatusEvent):void { switch(e.info.code) { case "NetConnection.Connect.Success": break; //The connection attempt succeeded. case "NetConnection.Connect.Closed": break; //The connection was closed successfully. case "NetConnection.Connect.Failed": break; //The connection attempt failed. } } An unsuccessful connection attempt is usually due to certain ports being blocked by a firewall. If this is the case, you have no choice but to report the failure to the user, as they won't be connecting to anyone until the situation changes. Success, on the other hand, rewards you with your very own nearID. This is a string property of the NetConnection object that represents that particular NetConnection, on that particular Flash Player, on that particular computer. No other NetConnection object in the world will have the same nearID. The nearID is like your own personal phone number - people who want to talk to you will need to know it. 
The reverse is also true: you will not be able to connect to anyone else without knowing their nearID. When you supply someone else with your nearID, they will use it as a farID: the farID is the ID of the client that you are trying to connect to. If someone else gives you their nearID, you can use it as a farID to connect to them. Get it? So all we have to do is connect to a client and ask them for their nearID, and then... oh wait. How do we find out their nearID (to use as our farID) if we're not connected to each other in the first place? The answer, which you'll be suprised to hear, is that it's impossible. You need some kind of third-party service to swap the ids over. Examples would be: - Building a server application to act as a 'lobby' - Cooking something up using NetGroups, which we might look at in a future tutorial Step 3: Using Streams The network connection is purely conceptual and doesn't help us much after the connection has been set up. To actually transfer data from one end of the connection to another we use NetStream objects. If a network connection can be thought of as building a railway between two cities, then a NetStream is a mail train that carrys actual messages down the track. NetStreams are one-directional. Once created they act as either a Publisher (sending information), or a Subscriber (receiving information). If you want a single client to both send and receive information over a connection, you will therefore need two NetStreams in each client. Once created a NetStream can do fancy things like stream audio and video, but in this tutorial we will stick with simple data. If, and only if, we recieve a NetStatusEvent from the NetConnection with a code of NetConnection.Connect.Success, we can create a NetStream object for that connection. For a publisher, first construct the stream using a reference to the netConnection object we just created, and the special pre-defined value. Second, call publish() on the stream and give it a name. The name can be anything you like, it's just there for a subscriber to differentiate between multiple streams coming from the same client. var ns:NetStream = new NetStream(netConnection, NetStream.DIRECT_CONNECTIONS); ns.publish(name, null); To create a subscriber, you again pass the netConnection object into the constructor, but this time you also pass the farID of the client you want to connect to. Secondly, call play() with the name of the stream that corresponds to the name of the other client's publishing stream. To put it another way, if you publish a stream with the name 'Test', the subscriber will have to use the name 'Test' to connect to it. var ns:NetStream = new NetStream(netConnection, farID); ns.play(name); Note how we needed a farID for the subscriber, and not the publisher. We can create as many publishing streams as we like and all they will do is sit there and wait for a connection. Subscribers, on the other hand, need to know exactly which computer in the world they're supposed to be subscribing to. Step 4: Transferring Data Once a publishing stream is set up it can be used to send data. The netstream Send method takes two arguments: a 'handler' name, and a variable length set of parameters. You can pass any object you like as one of these parameters, including basic types like String, int and Number. Complex objects are automatically 'serialized' - that is, they have all their properties recorded on the sending side and then re-created on the recieving side. Arrays and ByteArrays copy just fine too. 
The handler name corresponds directly to the name of a function that will eventually be called on the receiving side. The variable parameter list corresponds directly to the arguments the receiving function will be called with. So if a call is made such as: var i:int = 42; netStream.send("Test", "Is there anybody there?", i); The receiver must have a method with the same name and a corresponding signature: public function Test(message:String, num:int):void { trace(message + num); } On what object should this receiving method be defined? Any object you like. The NetStream instance has a property called client which can accept any object you assign to it. That's the object on which the Flash Player will look for a method of the corresponding sending name. If there's no method with that name, or if the number of parameters is incorrect, or if any of the argument types cannot be converted to the parameter type, an AsyncErrorEvent will be fired for the sender. Step 5: Pulling everything together Let's consolidate the things we've learned so far by putting everything into some kind of framework. Here's what we want to include: - Connecting to the Cirrus service - Creating publishing and subscribing streams - Sending and receiving data - Detecting and reporting errors In order to receive data we need some way of passing an object into the framework that has member functions which can be called in response to the corresponding send calls. Rather than an arbitrary object parameter, I'm going to code a specific interface. I'm also going to put into the interface some callbacks for the various error events that Cirrus can send out - that way I can't just ignore them. package { import flash.events.ErrorEvent; import flash.events.NetStatusEvent; import flash.net.NetStream; public interface ICirrus { function onPeerConnect(subscriber:NetStream):Boolean; function onStatus(e:NetStatusEvent):void; function onError(e:ErrorEvent):void; } } I want my Cirrus class to be as easy to use as possible, so I want to hide the basic details of streams and connections from the user. Instead, I'll have one class that acts as either a sender or reciever, and which connects the Flash Player to the Cirrus service automatically if another instance hasn't done so already. package { import flash.events.AsyncErrorEvent; import flash.events.ErrorEvent; import flash.events.EventDispatcher; import flash.events.IOErrorEvent; import flash.events.NetStatusEvent; import flash.events.SecurityErrorEvent; import flash.net.NetConnection; import flash.net.NetStream; public class Cirrus { private static var netConnection:NetConnection; public function get nc():NetConnection { return netConnection; } //Connect to the cirrus service, or if the netConnection object is not null //assume we are already connected public static function Init(key:String):void { if( netConnection != null ) return; netConnection = new NetConnection(); try { netConnection.connect("rtmfp://p2p.rtmfp.net", key); } catch(e:Error) { //Can't connect for security reasons, no point retrying. 
} } public function Cirrus(key:String, iCirrus:ICirrus) { Init(key); this.iCirrus = iCirrus; netConnection.addEventListener(AsyncErrorEvent.ASYNC_ERROR, OnError); netConnection.addEventListener(IOErrorEvent.IO_ERROR, OnError); netConnection.addEventListener(SecurityErrorEvent.SECURITY_ERROR, OnError) netConnection.addEventListener(NetStatusEvent.NET_STATUS, OnStatus); if( netConnection.connected ) { netConnection.dispatchEvent(new NetStatusEvent(NetStatusEvent.NET_STATUS, false, false, {code:"NetConnection.Connect.Success"})); } } private var iCirrus:ICirrus; public var ns:NetStream = null; } } We'll have one method to turn our Cirrus object into a publisher, and another to turn it into a sender: public function Publish(name:String, wrapSendStream:NetStream = null):void { if( wrapSendStream != null ) ns = wrapSendStream; else { try { ns = new NetStream(netConnection, NetStream.DIRECT_CONNECTIONS); }; ns.publish(name, null); } public function Play(farId:String, name:String):void { try { ns = new NetStream(netConnection, farId); }; try { ns.play.apply(ns, [name]); } catch(e:Error) {} } Finally, we need to pass along the events to the interface we created: private function OnError(e:ErrorEvent):void { iCirrus.onError(e); } private function OnStatus(e:NetStatusEvent):void { iCirrus.onStatus(e); } Step 6: Creating a Test Application Consider the following scenario involving two Flash applications. The first app has a needle that steadily rotates around in a circle (like a hand on a clock face). On each frame of the app, the hand is rotated a little further, and also the new angle is sent across the internet to the receiving app. The receiving app has a needle, the angle of which is set purely from the latest message received from the sending app. Here's a question: Do both needles (the needle for the sending app and the needle for the receiving app) always point to the same position? If you answered 'yes', I highly recommend you read on. Let's build it and see. We'll draw a simple needle as a line eminating from the origin (coordinates (0,0)). This way, whenever we set the shape's rotation property the needle will always rotate as if one end is fixed, and also we can easily position the shape by where the centre of rotation should be: private function CreateNeedle(x:Number, y:Number, length:Number, col:uint, alpha:Number):Shape { var shape:Shape = new Shape(); shape.graphics.lineStyle(2, col, alpha); shape.graphics.moveTo(0, 0); shape.graphics.lineTo(0, -length); //draw pointing upwards shape.graphics.lineStyle(); shape.x = x; shape.y = y; return shape; } It's inconvenient to have to set up two computers next to each other so on the receiver we'll actually use two needles. The first (red needle) will act just as in the description above, setting its angle purely from the latest message received; the second (blue needle) will get its initial position from the first rotation message received, but then rotate automatically over time with no further messages, just like the sending needle does. This way, we can see any discrepancy between where the needle should be and where the received rotation messages say it should be, all by starting both apps and then only viewing the receiving app. 
private var first:Boolean = true; //Called by the receiving netstream when a message is sent public function Data(value:Number):void { shapeNeedleB.rotation = value; if( first ) { shapeNeedleA.rotation = value; first = false; } } private var dateLast:Date = null; private function OnEnterFrame(e:Event):void { if( dateLast == null ) dateLast = new Date(); //Work out the amount of time elapsed since the last frame. var dateNow:Date = new Date(); var s:Number = (dateNow.time - dateLast.time) / 1000; dateLast = dateNow; //Needle A is always advanced on each frame. //But if there is a receiving stream attached, //also transmit the value of the rotation. shapeNeedleA.rotation += 360 * (s/4); if( cirrus.ns.peerStreams.length != 0 ) cirrus.ns.send("Data", shapeNeedleA.rotation); } We'll have a text field on the app that allows the user to enter a farID to connect to. If the app is started without entering a farID it will set itself up as a publisher. That pretty much covers creating the app you see at the top of the page. If you open two browser windows you can copy the id from one window to the other, and set one app to subscribe to the other. It will actually work for any two computers connected to the Internet - but you'll need some way of copying over the nearID of the subscriber. Step 7: Putting a Spanner In the Works If you run both the sender and receiver on the same computer the rotation information for the needle doesn't have far to travel. In fact, the data packets sent out from the sender don't even have to touch the local network at all because they are destined for the same machine. In real-world conditions the data has to to make many hops from computer to computer and with each hop introduced, the likelihood of problems increase. Latency is one such problem. The further the data physically has to travel, the longer it will take to arrive. For a computer based in London, data will take less time to arrive from New York (a quarter of the way around the globe) than from Sydney (half way around the globe). Network congestion is also a problem. When a device on the Internet is operating at saturation point and is asked to transfer yet another packet, it can do nothing but discard it. Software using the internet must then detect the lost packet and ask the sender for another copy, all of which adds lag into the system. Depending on each end of the connection's location in the world, time of day, and available bandwidth the quality of the connection will vary widely. So how do you hope to test for all these different scenarios? The only practical answer is not to go out and try and find all these different conditions, but to re-create a given condition as required. This can be achieved using something called a 'WAN emulator'. A WAN (Wide Area Network) emulator is software that interferes with the network traffic travelling to and from the machine it's running on, in such a way as to attempt to recreate different network conditions. For example, by simply discarding network packets transmitted from a machine, it can emulate the packet loss that might occur at some stage in the real-world transmission of the data. By delaying packets by some amount before they are sent on by the network card, it can simulate various levels of latency. There are various WAN emulators, for various platforms (Windows, Mac, Linux), all licensed in various ways. For the rest of this article I'm going to use the Softperfect Connection Emulator for Windows for two reasons: it's easy to use, and it has a free trial. 
(The author and Tuts+ are in no way affiliated with the product mentioned. Use at your own risk.) Once your WAN Emulator is installed and running, you can easily test it by downloading some kind of stream (such as Internet radio, or streaming video) and gradually increasing the amount of packet loss. Inevitably the playback will stall once the packet loss reaches some critical value which depends on your bandwidth and the size of the stream. Oh, and please note the following points: - If both the sending and receiving apps are on the same computer the connection will work just fine, but the WAN Emulator will not be able to affect the packets sent between them. This is because (on Windows at least) packets destined for the same computer are not sent to the network device. A sender and receiver on the same local network works fine, however - plus you can copy the nearID to a text file so you don't have to write it down. - These days, when a broswer window is minimized, the browser artificially reduces the framerate of the SWF. Keep the browser window visible on screen for consistent results. SoftPerfect emulator showing packet loss In the normal state you will see the red and blue needles point to pretty much the same position, perhaps with the red needle occasionally flickering as it falls behind, then suddenly catching up again. Now if you set your WAN emulator to 2% packet loss you will see the effect become much more pronounced: roughly every second or so you will see the same flicker. This is literally what happens when the packet carrying the rotation information is lost: the red needle just sits and waits for the next packet. Imagine how it would look if the app wasn't transferring the needle rotation, but the position of some other player in a multiplayer game - the character would stutter every time it moved to a new position. In adverse conditions you may expect (and therefore should design for) up to 10% packet loss. Try this with your WAN Emulator and you might catch a glimpse of a second phenomenon. Clearly the stuttering effect is more pronounced - but if you look closely, you'll notice that when the needle falls a long way behind, it doesn't actually snap back to the correct position but has to quickly 'wind' forwards again. In the game example this is undesirable for two reasons. First, it's going to look odd to see a character not just stuttering but then positively zooming towards its intended position. Second, if all we want to see is a player character at its current position then we don't care about all those intermeditate positions: we only want the most recent position when the packet is lost and then retransmitted. All information except the most recent is a waste of time and bandwidth. SoftPerfect emulator showing latency Set your packet loss back to zero and we'll look at latency. It's unlikely that in real-world conditions you'll ever get better then about 30ms latency so set your WAN Emulator for that. When you activate the emulation you'll notice the needle drop back quite some way as each endpoint reconfigures itself to the new network speed. Then, the needle will catch up again until it is consistently some distance behind where it should be. In fact the two needles will look rock solid: just slightly apart from each other as they rotate. By setting different amounts of latency, 30ms, 60ms, 90ms, you can practically control how far apart the needles are. Image the computer game again with the player character always some distance behind where they should be. 
Every time you aim at the player and take a shot you will miss, because every time you line up the shot you're looking at where the player used to be, and not where they are now. The worse the latency, the more apparent the problem. Players with poor internet connections could be, for all purposes, invulnerable! Step 8: Reliability There aren't many quick fixes in life so it's a pleasure to relate the following one. When we looked at packet loss we saw how the needle would noticably wind forwards as it caught up to its intended rotation after a loss of information. The reason for this is that behind the scenes each packet sent had a serial number associated with it that indicated its order. In other words, if the sender were to send out 4 packets... A, B, C, D And if one, lets say 'B' is lost in transmission so that the receiver gets... A, C, D ...the receiving stream can pass 'A' immediately to the app, but then has to inform the sender about this missing packet, wait for it to be received again, then pass 're-transmitted copy of B', 'C', 'D'. The advantage of this system is that messages will always be received in the order they were sent, and that any missing information is filled in automatically. The disadvantage is that the loss of a single packet causes relatively large delays in the transmission. In the computer game example discussed (where we are updating the player character's position in real time), despite not actually wanting to lose information, it's better to just wait for the next packet to come along than to take the time to tell the sender and wait for re-transmission. By the time packet 'B' arrives it will already have been superseded by packets 'C', and 'D', and the data it contains will be stale. As of Flash Player 10.1, a property was added to the NetStream class to control just this kind of behaviour. It is used like this: public function SetRealtime(ns:NetStream):void { ns.dataReliable = false; ns.bufferTime = 0; } Specifically it's the dataReliable property that was added, but for technical reasons it should always be used in conjunction with setting the bufferTime property to zero. If you alter the code to set the sending and receiving streams in this way and run another test on packet loss, you will notice the winding effect disappears. Step 9: Interpolation That's a start, but it still leaves a very jittery needle. The problem is that the position of the receiving needle is entirely at the mercy of the messages received. At even 10% packet loss the vast majority of information is still being received, yet because graphically the app depends so much on a smooth and regular flow of messages, any slight discrepancy shows up immediately. We know how the rotation should look; why not just 'fill in' the missing information to wallpaper over the cracks? We'll start with a class like the following that has two methods, one for updating with the most current rotation, one for reading off the current rotation: public class Msg { public function Write(value:Number, date:Date):void { } public function Read():Number { } } Now the process has been 'decoupled'. Every frame we can call the Read() method and update the shape's rotation. As and when new messages come in we can call the Write() method to update the class with the latest information. We'll also adjust the app so that it receives not just the rotation but the time the rotation was sent. The process of filling in missing values from known ones is called interpolation. 
Interpolation is a large subject that takes many forms, so we will deal with a subset called Linear Interpolation, or 'Lerping'. Programatically it looks like this: public function Lerp(a:Number, b:Number, x:Number):Number { return a + ((b - a) * x); } A and B are any two values; X is usually a value between zero and one. If X is zero, the method returns A. If X is one, the method returns B. For fractional values between zero and one, the method returns values part way between A and B - so an X value of 0.25 returns a value 25% of the way from A to B. In other words, if at 1:00pm O've driven 5 miles, and at 2:00pm i've driven 60 miles, then at 1:30pm I've driven Lerp(5, 60, 0.5) miles. As it happens I may have sped up, slowed down, and waited in traffic at various parts of the journey, but the interpolation function can't account for that as it only has two values to work from. Therefore the result is a linear approximation and not an exact answer. //Hold 2 recent values to interpolate from. private var valueA:Number = NaN; private var valueB:Number = NaN; //And the instances in time that the values refer to. private var secA:Number = NaN; private var secB:Number = NaN; public function Write(value:Number, date:Date):void { var secC:Number = date.time / 1000.0; //If the new value is reasonably distant from the last //then set a as b, and b as the new value. if( isNaN(secB) || secC -secB > 0.1) { valueA = valueB; secA = secB; valueB = value; secB = secC; } } public function Read():Number { if( isNaN(valueA) ) return valueB; var secC:Number = new Date().time / 1000.0; var x:Number = (secC-secA) / (secB-secA); return Lerp(valueA, valueB, x); } Step 10: So Near and Yet So Far If you implement the code above you'll notice that it almost works correctly but seems to have some sort of glitch - every time the needle does one rotation it appears to then suddenly snap back in the opposite direction. Did we miss something? The documentation for the rotation property of the DisplayObject class reveals the following:. That was naive - we assumed a single number line from which we could pick any two points and interpolate. Instead we're dealing not with a line but with a circle of values. If we go past +180, we wrap around again to -180. That's why the needle was behaving strangely. We still need to interpolate, but we need a form of interpolation that can wrap correctly around a circle. Imagine looking at two separate images of somebody riding a bike. In the first image the pedals are positioned towards the top of the bike; in the second image the pedals are positioned towards the front of the bike. From just these two images and with no additional knowledge it's not possible to work out whether the rider is pedalling forwards or backwards. The pedals could have advanced a quarter of a circle forwards, or three-quarters of a circle backwards. As it happens, in the app we've built, the needles are always 'pedalling' forwards, but we'd like to code for the general case. The standard way to resolve this is to assume that the shortest distance around the circle is the correct direction and also hope that updates come in fast enough so that there is less than half a circle's difference between each update. You may have had the experience playing a multiplayer driving game where another player's car has momentarily rotated in a seemingly impossible way - that's the reason why. 
var min:Number = -180;
var max:Number = +180;

//We can 'add' or 'subtract' our way around the circle
//giving two different measures of distance
var difAdd:Number = (b > a)? b-a : (max-a) + (b-min);
var difSub:Number = (b < a)? a-b : (a-min) + (max-b);

If 'difAdd' is smaller than 'difSub', we will start at 'a', and add to it a linear interpolation of the amount X. If 'difSub' is the lesser distance, we will start at 'a' and subtract from it a linear interpolation of the amount X. Potentially that might give a value which is out of the 'min' and 'max' range, so we will use some modular arithmetic to get a value which is back in range again. The full set of calculations looks like this:

//A function that gives a similar result to the %
//mod operator, but for float values.
public function Mod(val:Number, div:Number):Number
{
    return (val - Math.floor(val / div) * div);
}

//Ensures that values out of the min/max range
//wrap correctly back in range
public function Circle(val:Number, min:Number, max:Number):Number
{
    return Mod(val - min, (max-min) ) + min;
}

//Performs a circular interpolation of A and B by the factor X,
//wrapping at extremes min/max
public function CLerp(a:Number, b:Number, x:Number, min:Number, max:Number):Number
{
    var difAdd:Number = (b > a)? b-a : (max-a) + (b-min);
    var difSub:Number = (b < a)? a-b : (a-min) + (max-b);
    return (difAdd < difSub)? Circle( a + (difAdd*x), min, max) : Circle( a - (difSub*x), min, max);
}

If you add this to the code and re-test, you should find the receiver's needle actually looks pretty smooth under a variety of network conditions. The source code attached to this tutorial has several constants which can be changed to re-compile with various combinations of the features we have discussed.

Conclusion

We began by looking at how to create a Cirrus connection and then set up NetStreams between clients. This was wrapped up into a reusable class that we could test with and expand on. We created an application and examined its performance under different networking conditions using a utility, then looked at techniques to improve the experience for the application user. Finally we discovered that we have to apply these techniques with care and with an understanding of what underlying data the app is representing. I hope this has given you a basic grounding in building real time applications and that you now feel you are equipped to face the issues involved. Good luck!
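As a final reference, here is a minimal sketch of how the wrapper class built in this tutorial might be used from an application. The Main class name and the choice between publishing and playing are illustrative only; Cirrus and ICirrus are the types defined above, and the key string stands in for your own developer key.

package
{
    import flash.display.Sprite;
    import flash.events.ErrorEvent;
    import flash.events.NetStatusEvent;
    import flash.net.NetStream;

    public class Main extends Sprite implements ICirrus
    {
        private var cirrus:Cirrus;

        public function Main()
        {
            //Connect (or reuse the existing connection) and pick a role once connected.
            cirrus = new Cirrus("<my string here>", this);
        }

        public function onStatus(e:NetStatusEvent):void
        {
            if( e.info.code == "NetConnection.Connect.Success" )
            {
                //Either publish a named stream for others to subscribe to...
                cirrus.Publish("Test");
                //...or, given a farID obtained out of band, subscribe instead:
                //cirrus.Play("<farID from the other client>", "Test");
            }
        }

        public function onPeerConnect(subscriber:NetStream):Boolean { return true; }
        public function onError(e:ErrorEvent):void { trace(e.text); }
    }
}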
https://code.tutsplus.com/tutorials/building-real-time-web-applications-with-adobe-cirrus--active-10655
#include <Dhcp4.h> Definition at line 373 of file Dhcp4.h. The completion status of transmitting and receiving. Definition at line 377 of file Dhcp4.h. If not NULL, the event that will be signaled when the collection process completes. If NULL, this function will busy-wait until the collection process competes. Definition at line 382 of file Dhcp4.h. The pointer to the server IP address. This address may be a unicast, multicast, or broadcast address. Definition at line 386 of file Dhcp4.h. The server listening port number. If zero, the default server listening port number (67) will be used. Definition at line 390 of file Dhcp4.h. The pointer to the gateway address to override the existing setting. Definition at line 394 of file Dhcp4.h. The number of entries in ListenPoints. If zero, the default station address and port number 68 are used. Definition at line 398 of file Dhcp4.h. An array of station address and port number pairs that are used as receiving filters. The first entry is also used as the source address and source port of the outgoing packet. Definition at line 403 of file Dhcp4.h. The number of seconds to collect responses. Zero is invalid. Definition at line 407 of file Dhcp4.h. The pointer to the packet to be transmitted. Definition at line 411 of file Dhcp4.h. Number of received packets. Definition at line 415 of file Dhcp4.h. The pointer to the allocated list of received packets. Definition at line 419 of file Dhcp4.h.
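As a rough illustration of how these fields fit together, the sketch below fills in a token and hands it to the DHCPv4 protocol's TransmitReceive() service. It is a sketch only: the field types follow the UEFI specification's definition of this token, while the packet construction, timeout value, and error handling are assumptions, and the surrounding driver code (locating EFI_DHCP4_PROTOCOL, building the EFI_DHCP4_PACKET) is omitted.

#include <Uefi.h>
#include <Library/BaseMemoryLib.h>
#include <Library/MemoryAllocationLib.h>
#include <Protocol/Dhcp4.h>

// Sketch: issue one transmit/receive exchange using EFI_DHCP4_TRANSMIT_RECEIVE_TOKEN.
// Assumes Dhcp4 is an already-located EFI_DHCP4_PROTOCOL* and Packet was built elsewhere.
EFI_STATUS
ExampleTransmitReceive (
  IN EFI_DHCP4_PROTOCOL  *Dhcp4,
  IN EFI_DHCP4_PACKET    *Packet,
  IN EFI_IPv4_ADDRESS    *ServerIp
  )
{
  EFI_DHCP4_TRANSMIT_RECEIVE_TOKEN  Token;
  EFI_STATUS                        Status;

  ZeroMem (&Token, sizeof (Token));

  Token.RemoteAddress    = *ServerIp;  // unicast, multicast, or broadcast server address
  Token.RemotePort       = 0;          // 0 selects the default server port (67)
  Token.ListenPointCount = 0;          // 0 selects the station address and port 68
  Token.ListenPoints     = NULL;
  Token.TimeoutValue     = 5;          // seconds to collect responses; zero is invalid
  Token.Packet           = Packet;
  Token.CompletionEvent  = NULL;       // NULL: busy-wait until collection completes

  Status = Dhcp4->TransmitReceive (Dhcp4, &Token);
  if (EFI_ERROR (Status)) {
    return Status;
  }

  // With a NULL CompletionEvent the call returns after collection; check the results.
  if (EFI_ERROR (Token.Status) || Token.ResponseCount == 0) {
    return EFI_NOT_FOUND;
  }

  // Token.ResponseList points to the allocated list of received packets.
  FreePool (Token.ResponseList);
  return EFI_SUCCESS;
}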
https://dox.ipxe.org/structEFI__DHCP4__TRANSMIT__RECEIVE__TOKEN.html
CC-MAIN-2020-24
en
refinedweb
Building custom components from Quasar components - digiproduct last edited by digiproduct I am trying to use the following tutorial’s technique to create a new custom component … Re: [How To] Building components with Quasar I am trying to follow the tutorial in the above forum message to create a custom component from a Quasar component just to get the process correct before I move onto slightly more complex components … but I get an error message so I must be doing something wrong … can anyone tell me what I’ve done wrong? The error message is Module not found: Error: Can't resolve 'Pinpad' in 'C:\code\quasar\mytestapp\src\pages' Here’s what I’ve got … In the pages folder of mytestapp, I have the following file called ‘Pinpad.vue’ <template> <div> <q-btn @Click Me</q-btn> </div> </template> <script> import { QBtn } from "quasar"; export default { data() { return { pin: "" }; }, methods: { handleClick() { console.log("Pinpad QBtn clicked"); } }, mounted: function() { console.log("Mounting Pinpad"); }, components: { QBtn } }; </script> and I have another page, called ‘Calendar.vue’ also in the pages, where I want to use this new component, and it is defined as follows:- <template> <q-page> <div class="q-pa-md"> <pinpad></pinpad> </div> </q-page> </template> <style> </style> <script> import pinpad from "Pinpad"; export default { data() { return {}; } }; </script> the actual full error that I get is Failed to compile. ./src/pages/Calendar.vue?vue&type=script&lang=js& (./node_modules/babel-loader/lib??ref--1-0!./node_modules/vue-loader/lib??vue-loader-options!./src/pages/Calendar.vue?vue&type=script&lang=js&) Module not found: Error: Can't resolve 'Pinpad' in 'C:\code\quasar\mytestapp\src\pages' Can anyone point me toward what I am doing wrong? - metalsadman last edited by metalsadman @digiproduct change the import pinpad from "Pinpad"to import pinpad from 'pages/mytestapp/Pinpad'or is your mytestappyour root if so pages/Pinpad? assuming you are using Quasar CLI, otherwise you’ll probably need to point it to the absolute/relative path ./Pinpadwhere your Pinpad.vue file resides. as you can see on the error log, your path is not pointing to the right folder. you can omit importing of quasar component when you are composing in a vue file, since those are already exposed when you add it in quasar.conf.js. @metalsadman … add it in “quasar.conf.js” ? I think that’s the bit I might be missing because I haven’t added anything to quasar.config.js @metalsadman Yes, I am using Quasar CLI I have my component in a file called ‘Pinpad.vue’ which is located inside of the pages folder of my app, which is called ‘mytestapp’ so the physical source file for the component is at ‘C:\code\quasar\mytestapp\src\pages\Pinpad.vue’ - metalsadman last edited by @digiproduct import pinpad from './Pinpad'will do, since you said Calendar.vue and Pinpad.vue is in same folder. @metalsadman That’s got me closer … I’ve got rid of the path error … But … now I get a Vue warning Unknown custom element: <pinpad> - did you register the component correctly? For recursive components, make sure to provide the "name" option. and, I think I have the name correct … because in “Pinpad.vue” I have name: "pinpad", - metalsadman last edited by @digiproduct register it as a component in your Calendar.vue. ie. export default { ... components: { pinpad } ... } @metalsadman PERFECT … many thanks. I had that in Calendar.vue before … I must have removed it when going round in circles. 
Your help is much appreciated … once again … @metalsadman One further quick question, if I may … What’s the difference between using the ‘pages’ folder and using the ‘components’ folder? I don’t understand why there is a distinction. - metalsadman last edited by metalsadman it’s just folder naming for organization of files. ie. pages contains full layout composing of components, while components contains your custom components that you can re-use in making pages. what i know tho, that they’ve registered these folders so you can use import from 'pages/...or import from 'components/...& etc… without the hassle of using absolute/relative paths whenever you want to point a file in one of those folders, it’s a webpack thing that i haven’t dwell deep in yet. @metalsadman I thought it must be something like that … thanks for the clarification
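Putting the whole thread together, the working pair of files ends up looking roughly like this (a reconstruction rather than the exact code posted above; the Quasar components need no manual import because they are registered through quasar.conf.js, and the button markup is restored to a plain @click binding):

src/pages/Pinpad.vue:

<template>
  <div>
    <q-btn @click="handleClick">Click Me</q-btn>
  </div>
</template>

<script>
export default {
  name: "pinpad",
  data() {
    return { pin: "" };
  },
  methods: {
    handleClick() {
      console.log("Pinpad QBtn clicked");
    }
  }
};
</script>

src/pages/Calendar.vue:

<template>
  <q-page>
    <div class="q-pa-md">
      <pinpad></pinpad>
    </div>
  </q-page>
</template>

<script>
import pinpad from "./Pinpad";

export default {
  components: { pinpad },
  data() {
    return {};
  }
};
</script>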
https://forum.quasar-framework.org/topic/3801/building-custom-components-from-quasar-components/
CC-MAIN-2020-24
en
refinedweb
NOTE: This article has been updated to use the new Sentry SDK Prereqs: - Sentry — Open-source error tracking that helps developers monitor and fix crashes in real time. - @sentry/browser — Sentry SDK for JavaScript - Webpack — Webpack is an open-source JavaScript module bundler. Its main purpose is to bundle JavaScript files for usage in a browser, yet it is also capable of transforming, bundling, or packaging just about any resource or asset. If you’re like us at INTURN, you’ve found great value in having live error tracking integrated with your apps. Not only can you see if there are any issues in your production applications, but you can hook up error tracking to any environment and see when and where an error is thrown. After doing some research, we chose Sentry for our error tracking because of its ability to track in multiple environments, and across multiple releases. When hooking this up to our front-end application, we used React’s Error Boundaries to manage our calls to Sentry with this bit of code: import React from 'react'; import InternalErrorBoundary from 'react-error-boundary'; import * as Sentry from '@sentry/browser';const defaultOnErrorHandler = (error, stack) => { console.error(error, stack); Sentry.captureException(error); };const ErrorBoundary = props => ( <InternalErrorBoundary onError={defaultOnErrorHandler} {...props} /> );export default ErrorBoundary; This way, whenever an Error Boundary is hit, we send a request to Sentry with the error and stack information. While we have a few other custom exceptions, the majority of the errors in our app are reported this way. At the end of the day, it was a simple way to get a ton of information about any errors our users see. After some time using Sentry in our production app, we noticed a couple issues with how our errors were being reported. The first issue was that we had no context in terms of which version of our software was throwing this error. The second, and most important, was that we were getting minified code in our reports, which made it difficult to figure out where the actual issue was in our files. We decided to make these two improvements to our Sentry setup: - Adding versions to our Sentry reporting by using git’s SHA-1 hash - Adding source maps to our error traces Adding versions to Sentry seemed like the low-hanging fruit, so we decided to tackle that first. In setting up Sentry, we had created a sentry.js file that housed our configuration for Sentry (if you need help doing this, see the Sentry docs here). This config accepts a release key, and since we already pass that into our build command as process.env.GIT_SHA1, we added that to our Sentry config. And just like that, errors were attached to a specific git hash! Now for adding source maps. The easiest solution was to build and bundle them with our application, but there are many obvious issues with doing this (security, bundle size, etc.). So we did some digging, and the first thing that we found was sentry-webpack-plugin, which we were excited about. Reading the docs, it seemed like a pretty simple setup to get source maps hooked up to Sentry. Unfortunately, two days later, we still couldn’t figure out how to get it working. We re-read the docs and tried every configuration that might make sense, but still got nothing. We weren’t even seeing files attempting to be uploaded to Sentry (if you had success with this plugin, please let us know what you did!). This is when frustration started to kick in. 
After no success with the plugin, we decided to try using Sentry’s CLI tool, sentry-cli. The first step was to yarn add -D @sentry/cli. Then, we needed to figure out how to use sentry-cli. After reading the docs here, we were able to access our projects with this command: yarn sentry-cli --auth-token=${SENTRY_API_KEY} projects --org={SENTRY_ORG} list NOTE: If you haven’t created an API Key for Sentry, you can do so by following this link (just make sure to enable the project:write option or you won’t be able to upload your source maps). Now that we were able to access the project, let’s try to create a release using the following command: yarn sentry-cli --auth-token=${SENTRY_API_KEY} releases --org={SENTRY_ORG} --project={SENTRY_PROJECT_NAME} new test-release This creates a new release in Sentry called test-release , which you should be able to see in your Sentry web app. NOTE: If you are using a git hash for your release name, the title of the release will be shown as a short version of that hash. However, actual release name is the full git hash. This threw us for quite some time, but hopefully our pain can save you some. The next step is to upload some source maps to a release, but first, we need some generated source maps to upload. During your build process, you want to make sure that you build source map files. For us, this meant turning the sourceMap option on in UglifyJsPlugin , and adding SourceMapDevToolPlugin to our build with the following configuration (appending the sourceMappingURL is required in Sentry’s docs): new webpack.SourceMapDevToolPlugin({ filename: '[name].[hash].js.map', exclude: ['vendor'], append: '//# sourceMappingURL=[url]', }), Next, you need to upload these files to Sentry with the following command: yarn sentry-cli --auth-token=${SENTRY_API_KEY} releases --org={SENTRY_ORG} --project={SENTRY_PROJECT_NAME} files ${GIT_SHA1} upload-sourcemaps "./dist/*.js.map" --url-prefix "~/dist/" This assumes you’re using the git hash for your release name, but if you’re not, you just need to substitute the GIT_SHA1 with whatever name or variable you decide to use for your release name. Just make sure it matches whatever is set in your app’s sentry.js file. Also, you’ll notice we added a --url-prefix . This was the last big hurdle for us to jump over to get these source maps working. Essentially, Sentry’s source map paths must match the file names exactly. We noticed that since we’re using a dist directory for our assets, we needed to prefix the url when we upload to Sentry. If you get stuck here, just check the names of the source maps in webpack’s output and compare it to the name of the uploaded files in your Sentry release. One final step here is to delete the generated source maps before you put this app into production, so clients don’t see them. We do this in our builds by running rm -rf ./dist/*.js.map after we upload. And that’s it! You should see the source maps uploaded to your release in Sentry, and when you use a version of your app that has the same release name set in its sentry.js file, the source maps will be applied in Sentry.
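For reference, the individual commands above can be strung together into one small release script (a sketch rather than anything from the original pipeline; it assumes SENTRY_API_KEY, SENTRY_ORG, SENTRY_PROJECT_NAME and GIT_SHA1 are already exported, and that webpack has just written the bundles and maps into ./dist):

#!/usr/bin/env bash
set -euo pipefail

# 1. Create the release, named after the full git hash
yarn sentry-cli --auth-token="${SENTRY_API_KEY}" releases \
  --org="${SENTRY_ORG}" --project="${SENTRY_PROJECT_NAME}" \
  new "${GIT_SHA1}"

# 2. Upload the source maps, prefixed so the names match the served files
yarn sentry-cli --auth-token="${SENTRY_API_KEY}" releases \
  --org="${SENTRY_ORG}" --project="${SENTRY_PROJECT_NAME}" \
  files "${GIT_SHA1}" upload-sourcemaps "./dist/*.js.map" --url-prefix "~/dist/"

# 3. Delete the maps before the app ships so clients never see them
rm -rf ./dist/*.js.map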
https://medium.com/inturn-eng/uploading-source-maps-to-sentry-with-webpack-ed29cc82b01d
CC-MAIN-2020-24
en
refinedweb
Eritrea 🇪🇷 Get import and export customs regulations before travelling to Eritrea. Eritrea is part of Africa, with its main city at Asmara. It is a Least Developed country with a population of 5M people. The main currency is the Nakfa. The languages spoken are Tigrinya, Arabic and English. Useful Information Find other useful information when you are travelling to another country, such as visa details, embassies, customs, health regulations and so on.
https://visalist.io/eritrea/customs
CC-MAIN-2020-24
en
refinedweb
Class yii\data\DataFilter DataFilter is a special yii\base\Model for processing query filtering specification. It allows validating and building a filter condition passed via request. Filter example: { "or": [ { "and": [ { "name": "some name", }, { "price": "25", } ] }, { "id": {"in": [2, 5, 9]}, "price": { "gt": 10, "lt": 50 } } ] } In the request the filter should be specified using a key name equal to $filterAttributeName. Thus actual HTTP request body will look like following: { "filter": {"or": {...}}, "page": 2, ... } Raw filter value should be assigned to $filter property of the model. You may populate it from request data via load() method: use yii\data\DataFilter; $dataFilter = new DataFilter(); $dataFilter->load(Yii::$app->request->getBodyParams()); In order to function this class requires a search model specified via $searchModel. This search model should declare all available search attributes and their validation rules. For example: class SearchModel extends \yii\base\Model { public $id; public $name; public function rules() { return [ [['id', 'name'], 'trim'], ['id', 'integer'], ['name', 'string'], ]; } } In order to reduce amount of classes, you may use yii\base\DynamicModel instance as a $searchModel. In this case you should specify $searchModel using a PHP callable: function () { return (new \yii\base\DynamicModel(['id' => null, 'name' => null])) ->addRule(['id', 'name'], 'trim') ->addRule('id', 'integer') ->addRule('name', 'string'); } You can use validate() method to check if filter value is valid. If validation fails you can use getErrors() to get actual error messages. In order to acquire filter condition suitable for fetching data use build() method. Note: This is a base class. Its implementation of build() simply returns normalized $filter value. In order to convert filter to particular format you should use descendant of this class that implements buildInternal() method accordingly. See also yii\data\ActiveDataFilter. Public Properties Hide inherited properties Public Methods Protected Methods Events Constants Property Details Actual attribute names to be used in searched condition, in format: [filterAttribute => actualAttribute]. For example, in case of using table joins in the search query, attribute map may look like the following: [ 'authorName' => '{{author}}.[[name]]' ] Attribute map will be applied to filter condition in normalize() method. Maps filter condition keywords to validation methods. These methods are used by validateCondition() to validate raw filter conditions. Error messages in format [errorKey => message]. public void setErrorMessages ( $errorMessages ) Raw filter value. Label for the filter attribute specified via $filterAttributeName. It will be used during error messages composition. Keywords or expressions that could be used in a filter. Array keys are the expressions used in raw filter value obtained from user request. Array values are internal build keys used in this class methods. Any unspecified keyword will not be recognized as a filter control and will be treated as an attribute name. Thus you should avoid conflicts between control keywords and attribute names. For example: in case you have control keyword 'like' and an attribute named 'like', specifying condition for such attribute will be impossible. You may specify several keywords for the same filter build key, creating multiple aliases. For example: [ 'eq' => '=', '=' => '=', '==' => '=', '===' => '=', // ... 
] Note: while specifying filter controls take actual data exchange format, which your API uses, in mind. Make sure each specified control keyword is valid for the format. For example, in XML tag name can start only with a letter character, thus controls like >, '=' or $gtwill break the XML schema. List of operators keywords, which should accept multiple values. Specifies the list of supported search attribute types per each operator. This field should be in format: 'operatorKeyword' => ['type1', 'type2' ...]. Supported types list can be specified as *, which indicates that operator supports all types available. Any unspecified keyword will not be considered as a valid operator.->property).. Normalizes filter value, replacing raw keys according to $filterControls and $attributeMap. Parses content of the message from $errorMessages, specified by message key. - attribute list: required, specifies the attributes array to be validated, for single attribute you can pass a string; - validator type: required, specifies the validator to be used. It can be a built-in validator name, a method name of the model class, an anonymous function, or a validator class name. - on: optional, specifies the scenarios array in which the validation rule can be applied. If this option is not set, the rule will apply to all scenarios. - additional name-value pairs can be specified to initialize the corresponding validator properties. Please refer to individual validator class API for possible properties. A validator can be either an object of a class extending yii\validators built-in valid(). Sets the list of error messages responding to invalid filter structure, in format: [errorKey => message]. Message may contain placeholders that will be populated depending on the message context. For each message a {filter} placeholder is available referring to the label for $filterAttributeName attribute. Validates search condition for a particular attribute. Validates attribute value in the scope of \yii\data\model. Validates block condition that consists of a single condition. This covers such operators as not. Validates filter condition. Validates conjunction condition that consists of multiple independent ones. This covers such operators as and and or. Validates filter attribute value to match filer condition specification. Validates operator condition.
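Pulling the pieces above together, a typical request-handling flow looks roughly like the sketch below (the surrounding controller/action context is assumed; with the base DataFilter the built condition is simply the normalized filter array, whereas yii\data\ActiveDataFilter would produce a condition usable with a query):

use yii\base\DynamicModel;
use yii\data\DataFilter;

$dataFilter = new DataFilter([
    'searchModel' => function () {
        return (new DynamicModel(['id' => null, 'name' => null]))
            ->addRule(['id', 'name'], 'trim')
            ->addRule('id', 'integer')
            ->addRule('name', 'string');
    },
]);

$dataFilter->load(Yii::$app->request->getBodyParams());

if (!$dataFilter->validate()) {
    // Invalid filter structure or values; report them back to the client.
    return $dataFilter->getErrors();
}

$condition = $dataFilter->build();
// With yii\data\ActiveDataFilter, $condition could be passed to Query::andWhere()
// to restrict the fetched records.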
https://www.yiiframework.com/doc/api/2.0/yii-data-datafilter
CC-MAIN-2020-29
en
refinedweb
I've tried to use an exact script from a tutorial by Unity (link:) the code : static var score : int; private var text : Text; function Awake () { text = GetComponent (Text); score = 0; } function Update () { text.text = "Score: " + score; } As far as I know everything was correct, but it is giving me this error : Assets/Scripts/ScoreManager.js(3,20): BCE0018: The name 'Text' does not denote a valid type ('not found'). Did you mean 'UnityEngine.UI.Text'? Please help me. Thank you in advance. Where it says GetComponent (Text); change it to GetComponent ("Text"); For all the other instances of GetComponent, you will need speech marks there too Answer by Qasem2014 · Dec 28, 2014 at 01:19 PM do you use this code in your script ? import UnityEngine.UI; Thanks that helped. @Qasem2014 where did you put the import UnityEngine.UI; in your code? When I added it into the same code @eshwar was using, it gave me a new error. BCE0044: expecting EOF, found .
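For anyone landing on this question later, the script compiles once the UI namespace is imported; a corrected version of the tutorial script looks like this (UnityScript, identical to the code above apart from the added import at the very top of the file):

// ScoreManager.js -- the import has to sit at the top, above any declarations
import UnityEngine.UI;

static var score : int;
private var text : Text;

function Awake () {
    text = GetComponent (Text);
    score = 0;
}

function Update () {
    text.text = "Score: " + score;
}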
https://answers.unity.com/questions/863823/unable-to-reference-a-component.html
CC-MAIN-2020-29
en
refinedweb
Pandas ExtensionDType/Array backed by Apache Arrow Project description fletcher A library that provides a generic set of Pandas ExtensionDType/Array implementations backed by Apache Arrow. They support a wider range of types than Pandas natively supports and also bring a different set of constraints and behaviours that are beneficial in many situations. Usage To use fletcher in Pandas DataFrames, all you need to do is to wrap your data in a FletcherChunkedArray object. Your data can be of either pyarrow.Array, pyarrow.ChunkedArray or a type that can be passed to pyarrow.array(…). import fletcher as fr import pandas as pd df = pd.DataFrame({ 'str': fr.FletcherChunkedArray(['a', 'b', 'c']) }) df.info() # RangeIndex: 3 entries, 0 to 2 # Data columns (total 1 columns): # str 3 non-null fletcher[string] # dtypes: fletcher[string](1) # memory usage: 100.0 bytes Development While you can use fletcher in pip-based environments, we strongly recommend using a conda based development setup with packages from conda-forge. # Create the conda environment with all necessary dependencies conda create -y -q -n fletcher python=3.6 \ pre-commit \ asv \ numba \ pandas \ pip \ pyarrow \ pytest \ pytest-cov \ six \ -c conda-forge # Activate the newly created environment source activate fletcher # Install fletcher into the current environment pip install -e . # Run the unit tests (you should do this several times during development) py.test # Code formatting can be automatically adjusted using black. Note that we have pinned the version of black to ensure that the formatting is reproducible. Benchmarks
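Because FletcherChunkedArray also accepts pyarrow objects directly, an existing Arrow array can be wrapped without going through a Python list first (a short sketch, not part of the README itself):

import pandas as pd
import pyarrow as pa

import fletcher as fr

# Wrap an existing Arrow array instead of a plain Python list
arrow_data = pa.array(["a", "b", None, "c"], type=pa.string())
df = pd.DataFrame({"str": fr.FletcherChunkedArray(arrow_data)})
df.info()  # the column dtype is reported as fletcher[string]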
https://pypi.org/project/fletcher/
CC-MAIN-2020-29
en
refinedweb
Components and supplies Necessary tools and machines Apps and online services About this project This project shows how to interface the GY-90615 IR temperature sensor with the Arduino UNO. Recently, I had to use the sensor for monitoring coffee temperature, and from all the things I learnt about the sensor, I felt it necessary to share this knowledge with everyone else in a simplified manner. I initially bought the GY-90615 package, but as I started to solder the connectors onto it, the resistors fell off and there was no way for me to solder them back on. So I used the MLX90615 sensor on the unit by itself and connected it in a different way, which is shown in the second schematic. For the resistors, I used two 4.7 kΩ parts, plus a 0.1 µF capacitor. Feel free to watch my YouTube video for the tutorial at: Shown are two schematics, one for the GY-90615 module and one for the MLX90615 sensor by itself. Follow the wiring on these schematics and use the code below to run it. The link for the necessary files for the sensor is below: Code MLX90615 code (C/C++) #include <mlx90615.h> #include <Wire.h> MLX90615 mlx = MLX90615(); void setup() { //setup MLX IR sensor mlx.begin(); // initialize serial communication: Serial.begin(9600); } void loop() { Serial.print("Ambient = "); Serial.print(mlx.get_ambient_temp()); Serial.print(" *C\tObject = "); Serial.print(mlx.get_object_temp()); Serial.println(" *C"); } Schematics Author: engineeringuofs. Published on April 10, 2020.
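As a follow-on to the sketch above, the same two library calls can drive a simple threshold check, for example flagging when the coffee has cooled below a drinkable temperature (an illustrative addition that is not part of the original project; the 60 °C threshold is an arbitrary assumption):

#include <mlx90615.h>
#include <Wire.h>

MLX90615 mlx = MLX90615();
const float DRINKABLE_C = 60.0;  // assumed threshold in degrees Celsius

void setup() {
  mlx.begin();
  Serial.begin(9600);
}

void loop() {
  float objectC = mlx.get_object_temp();  // temperature of the coffee surface
  if (objectC < DRINKABLE_C) {
    Serial.print("Coffee is down to ");
    Serial.print(objectC);
    Serial.println(" *C - drink it now.");
  }
  delay(1000);
}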
https://create.arduino.cc/projecthub/engineeringuofs/interfacing-ir-temperature-sensor-with-arduino-uno-279b70
CC-MAIN-2020-29
en
refinedweb
ocol Implementations Below you can find a list of a few protocol implementations that are done with SwiftNIO. This is a non-exhaustive list of protocols that are either part of the SwiftNIO project or are accepted into the SSWG's incubation process. All of the libraries listed below do all of their I/O in a non-blocking fashion using SwiftNIO. Low-level protocol implementationsLow-level protocol implementations Low-level protocol implementations are often a collection of ChannelHandlers that implement a protocol but still require the user to have a good understanding of SwiftNIO. Often, low-level protocol implementations will then be wrapped in high-level libraries with a nicer, more user-friendly API. High-level Platforms SwiftNIO aims to support all of the platforms where Swift is supported. Currently, it is developed and tested on macOS and Linux, and is known to support the following operating system versions: - Ubuntu 14.04+ - macOS 10.12+; (macOS 10.14+, iOS 12+, or tvOS 12+ with swift-nio-transport-services) Swift versionsSwift versions SwiftNIO 1SwiftNIO 1 The latest released SwiftNIO 1 version supports Swift 4.0, 4.1, 4.2, and 5.0. SwiftNIO 2SwiftNIO 2 The latest released SwiftNIO 2 version supports only Swift 5.0, 5.1, and 5.2. If you have a SwiftNIO 1 application or library that you would like to migrate to SwiftNIO 2, please check out the migration guide we prepared for you. CompatibilityCompatibility SwiftNIO follows SemVer 2.0.0 with a separate document declaring SwiftNIO's Public API. What this means for you is that you should depend on SwiftNIO with a version range that covers everything from the minimum SwiftNIO version you require up to the next major version. In SwiftPM that can be easily done specifying for example from: "2.0.0" meaning that you support SwiftNIO in every version starting from 2.0.0 up to (excluding) 3.0.0. SemVer and SwiftNIO's Public API guarantees should result in a working program without having to worry about testing every single version for compatibility. Conceptual OverviewConceptual Overview. SwiftNIO does not aim to provide high-level solutions like, for example, web frameworks do. Instead, SwiftNIO is focused on providing the low-level building blocks for these higher-level applications. When it comes to building a web application, most users will not want to use SwiftNIO directly: instead, they'll want to use one of the many great web frameworks available in the Swift ecosystem. Those web frameworks, however, may choose to use SwiftNIO under the covers to provide their networking support. The following sections will describe the low-level tools that SwiftNIO provides, and provide a quick overview of how to work with them. If you feel comfortable with these concepts, then you can skip right ahead to the other sections of this README. Basic ArchitectureBasic Architecture The basic building blocks of SwiftNIO are the following 8 types of objects: EventLoopGroup, a protocol EventLoop, a protocol Channel, a protocol ChannelHandler, a protocol Bootstrap, several related structures ByteBuffer, a struct EventLoopFuture, a generic class EventLoopPromise, a generic struct. All SwiftNIO applications are ultimately constructed of these various components. EventLoops and EventLoopGroupsEventLoops and EventLoopGroups The basic I/O primitive of SwiftNIO is the event loop. The event loop is an object that waits for events (usually I/O related events, such as "data received") to happen and then fires some kind of callback when they do. 
In almost all SwiftNIO applications there will be relatively few event loops: usually only one or two per CPU core the application wants to use. Generally speaking event loops run for the entire lifetime of your application, spinning in an endless loop dispatching events. Event loops are gathered together into event loop groups. These groups provide a mechanism to distribute work around the event loops. For example, when listening for inbound connections the listening socket will be registered on one event loop. However, we don't want all connections that are accepted on that listening socket to be registered with the same event loop, as that would potentially overload one event loop while leaving the others empty. For that reason, the event loop group provides the ability to spread load across multiple event loops. In SwiftNIO today there is one EventLoopGroup implementation, and two EventLoop implementations. For production applications there is the MultiThreadedEventLoopGroup, an EventLoopGroup that creates a number of threads (using the POSIX pthreads library) and places one SelectableEventLoop on each one. The SelectableEventLoop is an event loop that uses a selector (either kqueue or epoll depending on the target system) to manage I/O events from file descriptors and to dispatch work. Additionally, there is the EmbeddedEventLoop, which is a dummy event loop that is used primarily for testing purposes. EventLoops have a number of important properties. Most vitally, they are the way all work gets done in SwiftNIO applications. In order to ensure thread-safety, any work that wants to be done on almost any of the other objects in SwiftNIO must be dispatched via an EventLoop. EventLoop objects own almost all the other objects in a SwiftNIO application, and understanding their execution model is critical for building high-performance SwiftNIO applications. Channels, Channel Handlers, Channel Pipelines, and Channel ContextsChannels, Channel Handlers, Channel Pipelines, and Channel Contexts While EventLoops are critical to the way SwiftNIO works, most users will not interact with them substantially beyond asking them to create EventLoopPromises and to schedule work. The parts of a SwiftNIO application most users will spend the most time interacting with are Channels and ChannelHandlers.. Channels by themselves, however, are not useful. After all, it is a rare application that doesn't want to do anything with the data it sends or receives on a socket! So the other important part of the Channel is the ChannelPipeline. A ChannelPipeline is a sequence of objects, called ChannelHandlers, that process events on a Channel. The ChannelHandlers process these events one after another, in order, mutating and transforming events as they go. This can be thought of as a data processing pipeline; hence the name ChannelPipeline. All ChannelHandlers are either Inbound or Outbound handlers, or both. Inbound handlers process "inbound" events: events like reading data from a socket, reading socket close, or other kinds of events initiated by remote peers. Outbound handlers process "outbound" events, such as writes, connection attempts, and local socket closes. Each handler processes the events in order. For example, read events are passed from the front of the pipeline to the back, one handler at a time, while write events are passed from the back of the pipeline to the front. 
Each handler may, at any time, generate either inbound or outbound events that will be sent to the next handler in whichever direction is appropriate. This allows handlers to split up reads, coalesce writes, delay connection attempts, and generally perform arbitrary transformations of events. In general, ChannelHandlers are designed to be highly re-usable components. This means they tend to be designed to be as small as possible, performing one specific data transformation. This allows handlers to be composed together in novel and flexible ways, which helps with code reuse and encapsulation. ChannelHandlers are able to keep track of where they are in a ChannelPipeline by using a ChannelHandlerContext. These objects contain references to the previous and next channel handler in the pipeline, ensuring that it is always possible for a ChannelHandler to emit events while it remains in a pipeline. SwiftNIO ships with many ChannelHandlers built in that provide useful functionality, such as HTTP parsing. In addition, high-performance applications will want to provide as much of their logic as possible in ChannelHandlers, as it helps avoid problems with context switching. Additionally, SwiftNIO ships with a few Channel implementations. In particular, it ships with ServerSocketChannel, a Channel for sockets that accept inbound connections; SocketChannel, a Channel for TCP connections; DatagramChannel, a Channel for UDP sockets; and EmbeddedChannel, a Channel primarily used for testing. A Note on BlockingA Note on Blocking One of the important notes about ChannelPipelines is that they are thread-safe. This is very important for writing SwiftNIO applications, as it allows you to write much simpler ChannelHandlers in the knowledge that they will not require synchronization. However, this is achieved by dispatching all code on the ChannelPipeline on the same thread as the EventLoop. This means that, as a general rule, ChannelHandlers must not call blocking code without dispatching it to a background thread. If a ChannelHandler blocks for any reason, all Channels attached to the parent EventLoop will be unable to progress until the blocking call completes. This is a common concern while writing SwiftNIO applications. If it is useful to write code in a blocking style, it is highly recommended that you dispatch work to a different thread when you're done with it in your pipeline. BootstrapBootstrap While it is possible to configure and register Channels with EventLoops directly, it is generally more useful to have a higher-level abstraction to handle this work. For this reason, SwiftNIO ships a number of Bootstrap objects whose purpose is to streamline the creation of channels. Some Bootstrap objects also provide other functionality, such as support for Happy Eyeballs for making TCP connection attempts. Currently SwiftNIO ships with three Bootstrap objects: ServerBootstrap, for bootstrapping listening channels; ClientBootstrap, for bootstrapping client TCP channels; and DatagramBootstrap for bootstrapping UDP channels. ByteBufferByteBuffer The majority of the work in a SwiftNIO application involves shuffling buffers of bytes around. At the very least, data is sent and received to and from the network in the form of buffers of bytes. For this reason it's very important to have a high-performance data structure that is optimized for the kind of work SwiftNIO applications perform. 
For this reason, SwiftNIO provides ByteBuffer, a fast copy-on-write byte buffer that forms a key building block of most SwiftNIO applications. ByteBuffer provides a number of useful features, and in addition provides a number of hooks to use it in an "unsafe" mode. This turns off bounds checking for improved performance, at the cost of potentially opening your application up to memory correctness problems. In general, it is highly recommended that you use the ByteBuffer in its safe mode at all times. For more details on the API of ByteBuffer, please see our API documentation, linked below. Promises and FuturesPromises and Futures One major difference between writing concurrent code and writing synchronous code is that not all actions will complete immediately. For example, when you write data on a channel, it is possible that the event loop will not be able to immediately flush that write out to the network. For this reason, SwiftNIO provides EventLoopPromise<T> and EventLoopFuture<T> to manage operations that complete asynchronously. An EventLoopFuture<T> is essentially a container for the return value of a function that will be populated at some time in the future. Each EventLoopFuture<T> has a corresponding EventLoopPromise<T>, which is the object that the result will be put into. When the promise is succeeded, the future will be fulfilled. If you had to poll the future to detect when it completed that would be quite inefficient, so EventLoopFuture<T> is designed to have managed callbacks. Essentially, you can hang callbacks off the future that will be executed when a result is available. The EventLoopFuture<T> will even carefully arrange the scheduling to ensure that these callbacks always execute on the event loop that initially created the promise, which helps ensure that you don't need too much synchronization around EventLoopFuture<T> callbacks. Another important topic for consideration is the difference between how the promise passed to close works as opposed to closeFuture on a Channel. For example, the promise passed into close will succeed after the Channel is closed down but before the ChannelPipeline is completely cleared out. This will allow you to take action on the ChannelPipeline before it is completely cleared out, if needed. If it is desired to wait for the Channel to close down and the ChannelPipeline to be cleared out without any further action, then the better option would be to wait for the closeFuture to succeed. There are several functions for applying callbacks to EventLoopFuture<T>, depending on how and when you want them to execute. Details of these functions is left to the API documentation. Design PhilosophyDesign Philosophy SwiftNIO is designed to be a powerful tool for building networked applications and frameworks, but it is not intended to be the perfect solution for all levels of abstraction.. Applications that need extremely high performance from their networking stack may choose to use SwiftNIO directly in order to reduce the overhead of their abstractions. These applications should be able to maintain extremely high performance with relatively little maintenance cost. SwiftNIO also focuses on providing useful abstractions for this use-case, such that extremely high performance network servers can be built directly. The core SwiftNIO repository will contain a few extremely important protocol implementations, such as HTTP, directly in tree. 
However, we believe that most protocol implementations should be decoupled from the release cycle of the underlying networking stack, as the release cadence is likely to be very different (either much faster or much slower). For this reason, we actively encourage the community to develop and maintain their protocol implementations out-of-tree. Indeed, some first-party SwiftNIO protocol implementations, including our TLS and HTTP/2 bindings, are developed out-of-tree! DocumentationDocument Started SwiftNIO primarily uses SwiftPM as its build tool, so we recommend using that as well. If you want to depend on SwiftNIO in your own project, it's as simple as adding a dependencies clause to your Package.swift: dependencies: [ .package(url: "", from: "2.0.0") ] and then adding the appropriate SwiftNIO module(s) to your target dependencies. click Next twice. Finally, select the targets you are planning to use (for example NIO, NIOHTTP1, and NIOFoundationCompat) and click finish. Now will be able to import NIO (as well as all the other targets you have selected) in your project. To work on SwiftNIO itself, or to investigate some of the demonstration applications, you can clone the repository directly and use SwiftPM to help build it. For example, you can run the following commands to compile and run the example echo server: swift build swift test swift run NIOEchoServer To verify that it is working, you can use another shell to attempt to connect to it: echo "Hello SwiftNIO" | nc localhost 9999 If all goes well, you'll see the message echoed back to you. To work on SwiftNIO in Xcode 11+, you can just open the Package.swift file in Xcode and use Xcode's support for SwiftPM Packages. If you want to develop SwiftNIO with Xcode 10, you have to generate an Xcode project: swift package generate-xcodeproj An alternative: using docker-compose Alternatively, you may want to develop or test with docker-compose. First make sure you have Docker installed, next run the following commands: docker-compose -f docker/docker-compose.yaml run test Will create a base image with Swift runtime and other build and test dependencies, compile SwiftNIO and run the unit and integration tests docker-compose -f docker/docker-compose.yaml up echo Will create a base image, compile SwiftNIO, and run a sample NIOEchoServeron localhost:9999. Test it by echo Hello SwiftNIO | nc localhost 9999. docker-compose -f docker/docker-compose.yaml up http Will create a base image, compile SwiftNIO, and run a sample NIOHTTP1Serveron localhost:8888. Test it by curl Developing SwiftNIODeveloping SwiftNIO Note: This section is only relevant if you would like to develop SwiftNIO yourself. You can ignore the information here if you just want to use SwiftNIO as a SwiftPM package. For the most part, SwiftNIO development is as straightforward as any other SwiftPM project. With that said, we do have a few processes that are worth understanding before you contribute. For details, please see CONTRIBUTING.md in this repository. PrerequisitesPrerequisites SwiftNIO's master branch is the development branch for the next releases of SwiftNIO 2, it's Swift 5-only. To be able to compile and run SwiftNIO and the integration tests, you need to have a few prerequisites installed on your system. macOSmac
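Tying the pieces described above together (an event loop group, a ServerBootstrap and a single inbound handler), a bare-bones echo server looks roughly like the following sketch, in the spirit of the NIOEchoServer example mentioned earlier; the port and thread count are arbitrary choices:

import NIO

final class EchoHandler: ChannelInboundHandler {
    typealias InboundIn = ByteBuffer
    typealias OutboundOut = ByteBuffer

    func channelRead(context: ChannelHandlerContext, data: NIOAny) {
        // Queue the received bytes straight back to the peer...
        context.write(data, promise: nil)
    }

    func channelReadComplete(context: ChannelHandlerContext) {
        // ...and flush once the current read burst is finished.
        context.flush()
    }
}

let group = MultiThreadedEventLoopGroup(numberOfThreads: 1)
defer { try! group.syncShutdownGracefully() }

let bootstrap = ServerBootstrap(group: group)
    .childChannelInitializer { channel in
        channel.pipeline.addHandler(EchoHandler())
    }

let serverChannel = try bootstrap.bind(host: "127.0.0.1", port: 9999).wait()
try serverChannel.closeFuture.wait()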
https://libraries.io/cocoapods/CNIODarwin
CC-MAIN-2020-29
en
refinedweb
Burrow Burrow is a permissioned Ethereum smart-contract blockchain node which provides transaction finality and high transaction throughput on a proof-of-stake Tendermint consensus engine. Introduction This chart bootstraps a burrow network on a Kubernetes cluster using the Helm package manager. Installation Prerequisites To deploy a new blockchain network, this chart requires that two objects be present in the same Kubernetes namespace: a configmap should house the genesis file and each node should have a secret to hold any validator keys. The provided script, addresses.sh automatically provisions a number of files using the burrow toolkit, so please first ensure that burrow --version matches the image.tag in the configuration. This sequence also requires that the jq binary is installed. Two files will be generated, the first of note is setup.yaml which contains the two necessary Kubernetes specifications to be added to the cluster: curl -LO CHAIN_NODES=4 CHAIN_NAME="my-release-burrow" ./initialize.sh kubectl apply --filename setup.yaml Please note that the variable $CHAIN_NAME should be the same as the helm release name specified below with the -burrow suffix. Another file, addresses.yaml contains the the equivalent validator addresses to set in the charts. Deployment To install the chart with the release name my-release with the set of custom validator addresses: helm install <helm-repo>/burrow --name my-release --values addresses.yaml The configuration section below lists all possible parameters that can be configured during installation. Please also see the runtime configuration section for more information on how to setup your network properly. Uninstall To uninstall/delete the my-release deployment: $ helm delete my-release This command removes all the Kubernetes components associated with the chart and deletes the release. To remove the configmap and secret created in the prerequisites, follow these steps: kubectl delete secret ${CHAIN_NAME}-keys kubectl delete configmap ${CHAIN_NAME}-genesis Configuration The following table lists the configurable parameters of the Burrow chart and its default values. Specify each parameter using the --set key=value[,key=value] argument to helm install. For example, helm install <helm-repo>/burrow --name my-release \ --set=image.tag=0.23.2,resources.limits.cpu=200m -f addresses.yaml Alternatively, append additional values to the YAML file generated in the prerequisites. For example, helm install <helm-repo>/burrow --name my-release -f addresses.yaml Runtime It is unlikely that you will want to deploy this chart with the default runtime configuration. When booting permissioned blockchains in a cloud environment there are three predominant considerations in addition to the normal configuration of any cloud application. - What access rights to place on the ports? - What is the set of initial accounts and validators for the chain? - What keys should the validating nodes have? Each of these considerations will be dealt with in more detail below. Port Configuration Burrow utilizes three different ports by default: Peer: Burrow's peer port is used for P2P communication within the blockchain network as part of the consensus engine (Tendermint) to perform bilateral gossiping communication. Info: Burrow's info port is used for conducting remote procedures. GRPC: Burrow's grpc port can be used by JavaScript libraries to interact with the chain over websockets. 
The default configuration for the chart sets up the port access rights in the following manner: Peer: Peer ports are only opened within the cluster. By default, there is no P2P communication exposed to the general internet. Each node within the cluster has its own distinct peer service built by the chart which utilizes a ClusterIPservice type. Info: The info port is only opened within the cluster. By default, there is no info communication exposed to the general internet. There is one info service built by the chart which utilizes a ClusterIPservice type. The default info service used by the chart is strongly linked to node number 000and is not load balanced across the nodes by default so as to reduce any challenges with tooling that conduct long-polling after sending transactions. The chart offers an ingress which is connected to the info service, but this is disabledby default. GRPC: The grpc port is only opened within the cluster. By default, there is no grpc communication exposed to the general internet. There is one grpc service built by the chart which utilizes a ClusterIPservice type. The default grpc service used by the chart is load balanced across the nodes within the cluster by default because libraries which utilize this port typical do so on websockets and the service is able to utilize a sessionAffinity setting. In order to expose the peers to the general internet change the peer.service.type to NodePort. It is not advised to run P2P traffic through an ingress or other load balancing service as there is uncertainty with respect to the IP address which the blockchain node advertises and gossips. As such, the best way to expose P2P traffic to the internet is to utilize a NodePort service type. While such service types can be a challenge to work with in many instances, the P2P libraries that these blockchains utilize are very resilient to movement between machine nodes. The biggest gotcha with NodePort service types is to ensure that the machine nodes have proper egress within the cloud or data center provider. As long as the machine nodes do not have egress restrictions disabling the utilization of NodePort service types, the P2P traffic will be exposed fluidly. To expose the info service to the general internet change the default rpcInfo.ingress.enabled to true and add the appropriate fields to the ingress for your Kubernetes cluster. This will allow developers to connect to the info service from their local machines. To disable load balancing on the grpc service, change the rpcGRPC.service.loadBalance to false. Genesis Burrow initializes any single blockchain via use of a genesis.json which defines what validators and accounts are given access to the permissioned blockchain when it is booted. The chart imports the genesis.json file as a Kubernetes configmap and then mounts it in each node deployment. Validator Keys NOTE: The chart has not been security audited and as such one should use the validator keys functionality of the chart at one's own risk. Burrow blockchain nodes need to have a key available to them which has been properly registered within the genesis.json initial state. The registered key is what enables a blockchain node to participate in the P2P validation of the network. The chart imports the validator key files as Kubernetes secrets, so the security of the blockchain is only as strong as the cluster's integrity.
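Concretely, the three overrides discussed in this section can be collected into a small values file and passed alongside addresses.yaml (an illustrative sketch; the keys mirror the dotted paths named above, and any ingress host or TLS fields your cluster needs would still have to be added):

# expose-overrides.yaml
peer:
  service:
    type: NodePort        # expose P2P traffic to the general internet
rpcInfo:
  ingress:
    enabled: true         # expose the info service outside the cluster
rpcGRPC:
  service:
    loadBalance: false    # stop load balancing grpc across the nodes

helm install <helm-repo>/burrow --name my-release \
  -f addresses.yaml -f expose-overrides.yaml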
http://developer.aliyun.com/hub/detail?name=burrow&version=1.5.2
CC-MAIN-2020-29
en
refinedweb
To perform an action in a React component after calling setState, such as making an AJAX request or throwing an error, we use the setState callback. Here's something extremely important to know about state in React: updating a React component's state is asynchronous. It does not happen immediately. Therefore you will run into scenarios whereby parts of your code run before state has had a chance to update. To solve this specific React issue, we can use the setState function's callback. Whatever we pass into setState's second argument executes after the setState function updates. setState Callback in a Class Component Let's see how to perform a callback inside a React class component after setState executes: import React, { Component } from 'react'; class App extends Component { constructor(props) { super(props); this.state = { age: 0, }; } // this.checkAge is passed as the callback to setState updateAge = (value) => { this.setState({ age: value}, this.checkAge); }; checkAge = () => { const { age } = this.state; if (age !== 0 && age >= 21) { // Make API call to /beer } else { // Throw error 404, beer not found } }; render() { const { age } = this.state; return ( <div> <p>Drinking Age Checker</p> <input type="number" value={age} onChange={e => this.updateAge(e.target.value)} /> </div> ); } } export default App; This nifty drinking age checker component displays a single input. After changing the value inside that input, it changes the age value inside of its state. Focus in on the updateAge function. That's where the setState function gets called. Look at the second argument inside that setState function: it's calling checkAge. That's the callback function that will be executed after the age state value is updated. What we're essentially doing is waiting until age has fully updated in state to then make the call to check age. If we didn't wait, we might be checking an older age value. Cheers! 🍺 setState Callback in a Functional Component React 16.8 introduced Hooks which gave us a way to add state to functional components through the useState Hook. However, the useState Hook does not have a second callback argument. Instead, we use the useEffect Hook and its second argument, which is an array of dependencies. Let's take a look at the same example above, but this time in the context of a functional component that uses the useState and useEffect Hooks: import React, { useEffect, useState } from 'react'; function App() { const [age, setAge] = useState(0); const updateAge = (value) => { setAge(value); }; useEffect(() => { if (age !== 0 && age >= 21) { // Make API call to /beer } else { // Throw error 404, beer not found } }, [age]); return ( <div> <p>Drinking Age Checker</p> <input type="number" value={age} onChange={e => setAge(e.target.value)} /> </div> ); } export default App; If you haven't seen Hooks before, why not check out my Simple Introduction to React Hooks. Much of this component is the same as the Class component, with one vital difference: the useEffect function. Let's break it down line-by-line: useEffect(() => { if (age !== 0 && age >= 21) { // Make API call to /beer } else { // Throw error 404, beer not found } }, [age]); Starting from the bottom, we see parentheses with the age state variable inside of them. This is what's called the dependency array, and it tells this particular useEffect function to listen out for any changes to the age state variable. Once the age state variable changes, this useEffect function executes.
It's the equivalent of the setState callback function inside of React class components in our first example. You can have multiple useEffect functions in a single component. Nice! In the functional component example, onChange calls setAge directly, so IMHO updateAge seems to be useless (unless you want to point out the parallelism with the class example).
https://upmostly.com/tutorials/how-to-use-the-setstate-callback-in-react
CC-MAIN-2020-29
en
refinedweb
Documentation Pushing a conform to Shotgun Sending batch renders to Review Want to learn more? Advanced Topics Using export presets Bypassing Shotgun server side transcoding Customizing ffmpeg Copying the settings hook Modifying the hook Installation, Updates and Development Configuration Options Release Notes History The Flame shot exporter makes it easy to kickstart your Shotgun project! Simply select a Sequence to export inside of Flame, and the exporter will take care of the rest. It will create shots, tasks, set up cut information in Shotgun, generate folders on disk, render out plates to disk and send the media to Shotgun Review. Once done, you can jump straight into other tools such as Flare or Nuke to continue your work there. Documentation The Shotgun Flame Export App helps kickstart your project! Once you have created an initial conform in Flame, the Shot Exporter can help you quickly generate content in Shotgun, render out plates to disk and send content to review. Once you are up and running, the exporter app will also track all the renders happening in Flare or in Flame batch mode, making it easy to send content to review as part of your workflow. Pushing a conform to Shotgun Once you have your conform set up in Flame for a sequence, and have allocated shot names to all the segments in your timeline, select the sequence, right click and choose the Shotgun Shot Export option. This will bring up a Shotgun UI where you can enter some initial comments for your publish. These comments will be sent to review and also used when adding description to publishes and other content. In addition to the description, you can also select which output data format you want to use for your exported plates. These presets are part of the toolkit app configuration and can be configured to suit the needs of your studio. Once you click the submit button, a number of things will happen straight away: Shots and Tasks will be created in Shotgun. The list of tasks to associate with each new Shot that gets created is configurable via a Task template setting to make it quick and painless to create consistent structures. The shots will be parented under a sequence by default, but this is also configurable and if you are working with Scenes or Episodes, it is possible to reconfigure the exporter to work with these instead. Once Shotgun contains the right data, folders will be created on disk using the standard folder creation mechanism. This ensures that the project can be kick-started with a set of consistent folders for all shots that are being created. Once the two steps above have been carried out, you have the basic structure to proceed with further steps. These will happen in the background: Plates will be exported on disk for each shot according to the presets defined in your configuation. File locations are defined using the Toolkit Template system, meaning that the location of these plates will be well defined and understood by other tools downstream in the pipeline. Batch files and clip xml files will be exported. These are used by Flame to enable an iterative workflow where you can quickly render out new versions that are later pulled in to the main conform in Flame. Quicktimes are generated and uploaded to Shotgun for review. Sending batch renders to Review Once you have published a Flame batch file for a Shot, you can launch Flare directly from that Shot in order to open up the batch file with render and output settings pre-populated. 
In order to render out a new version, simply click the Render Range Button. Toolkit will display a dialog at this point where you can choose to send the render to Shotgun review or not. Files will be published and tracked by Shotgun and optionally also sent to review. Want to learn more? If you want to learn more, and see this workflow in action, head over to the Flame engine documentation where we also have some video content that demonstrates the various workflows in action. As always, if you have question regarding integration or customization, don't hesitate to reach out to our Support: toolkitsupport@shotgunsoftware.com Advanced Topics Below you'll find more advanced details relating to configuration and customization Using export presets The exporter uses a concept of Export Presets in its configuration. When you launch the Export UI inside of Flame you see a dropdown with the available export presets. Each preset is a configuration option which allows you to configure how files are written to disk and uploaded to Shotgun. High level settings such as file locations on disk are controlled directly in the environment configuration, making it easy to adjust the default configuration options to work with your pipeline. More advanced settings and control over the actual export xml content that is being passed to flame in order to control Flame, is handled by a hook where the behaviour is defined for each preset. In the hook, you have complete control over how media is being generated by the exporter. Bypassing Shotgun server side transcoding By default, Quicktimes are uploaded to Shotgun review by setting the Version.sg_uploaded_movie field. This in turn will trigger Shotgun server side transcoding; the uploaded quicktime will be further converted to mp4 and webm representations tailored for playback in browsers and mobile. Sometimes, it can be beneficial to bypass this server side transcoding. This is possible by setting the bypass_shotgun_transcoding configuration setting. When this is set to true, the integration will upload directly to the Version.sg_uploaded_movie_mp4 field in Shotgun, thereby bypassing the server side transcoding. In this case, no webm version is generated, so review playback will not be possible in Firefox. For more information, see Customizing ffmpeg When the exporter generates quicktimes, it uses a version of ffmpeg which comes distributed with Flame. By modifying the settings hook in the exporter, you can specify an external version of ffmpeg to use instead of the built-in one. The version of ffmpeg distributed with Flame is tracking the very latest advancements in ffmpeg transcoding and performance, so sometimes using the latest version may result in performance improvements. Please note that the way h264 parameters are passed to ffmpeg has changed between the version that is used by default and the latest versions. By switching to the latest generation of ffmpeg, it is possible to implement exactly the recommended transcoding guidelines that results in optimal upload and performance on the Shotgun side. You can find these guidelines here: We only recommend changing the ffmpeg version if you are an advanced user. In that case, follow these steps: Copying the settings hook All settings that need to be modified can be found in the settings hook that comes shipped with the Flame export app. In order to modify this hook, you first need to copy this hook file from its default location inside the app location into your configuration. 
Inside your project configuration, you'll typically find the hook file in a location similar to install/apps/app_store/tk-flame-export/va.b.c/hooks/settings.py. Copy this file into the hooks location inside of your configuration, e.g. config/hooks. We recommend renaming it to something a little more verbose than just settings.py in order to make it clear what it is: install/apps/app_store/tk-flame-export/va.b.c/hooks/settings.py -> config/hooks/flame_export_settings.py Now edit your Flame environment configuration file. This is typically config/env/includes/flame.yml. Under the tk-flame-export heading, you'll find the path to the hook being defined as settings_hook: '{self}/settings.py'. This essentially means that the configuration will look for the hook file inside the app location (e.g. {self}). Changing this to settings_hook: '{config}/flame_export_settings.py' will tell Toolkit to look for the hook file inside the configuration instad. In summary: settings_hook: '{self}/settings.py' -> '{config}/flame_export_settings.py' Modifying the hook Now we are ready to start modifying our config/hooks/flame_export_settings.py hook! Open it up in a text editor. You'll notice several methods relating to ffmpeg and ffmpeg settings. The first one to modify is the following: def get_external_ffmpeg_location(self): """ Control which version of ffmpeg you want to use when doing transcoding. By default, this hook returns None, indicating that the app should use the built-in version of ffmpeg that comes with Flame. If you want to use a different version of ffmpeg, simply return the path to the ffmpeg binary here. :returns: path to ffmpeg as str, or None if the default should be used. """ return None By returning None by default, the exporter will use Flame's built-in ffmpeg. Change this to return a full path to your ffmpeg. Keep in mind that if you are running a backburner cluster, ffmpeg may be called from any machines in the cluster, so make sure the executable is installed everywhere. Now that once the ffmpeg location is updated, you most likely either need or want to tweak the parameters passed to ffmpeg. This needs to be changed in two different methods: get_ffmpeg_quicktime_encode_parameterswill return the parameters used when generating a quicktime to be uploaded to Shotgun. get_local_quicktime_ffmpeg_encode_parameterswill return the parameters used when a quicktime is written to disk. For the shotgun upload, we recommend using the default Shotgun encoding settings as a starting point: def get_ffmpeg_quicktime_encode_parameters(self): return "-vcodec libx264 -pix_fmt yuv420p -vf 'scale=trunc((a*oh)/2)*2:720' -g 30 -b:v 2000k -vprofile high -bf 0" For the local Shotgun transcode, we recommend basing your settings on the Shotgun transcode settings but removing the resolution constraints and increasing the bit rate: def get_local_quicktime_ffmpeg_encode_parameters(self): return "-vcodec libx264 -pix_fmt yuv420p -g 30 -b:v 6000k -vprofile high -bf 0" Related Apps and Documents Installation and Updates Adding this App to the Shotgun Pipeline Toolkit If you want to add this app to Project XYZ, in an environment named asset, execute the following command: > tank Project XYZ install_app asset tk-maya tk-flame.45 or higher to use this. - You need Engine version v1.14.4 or higher to use this. Configuration Below is a summary of all the configuration settings used. These settings need to be defined in the environment file where you want to enable this App or Engine. 
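To make the settings summary below easier to picture, here is a hedged sketch of what the tk-flame-export block in an environment include such as config/env/includes/flame.yml could look like. The setting names and default values are taken from the reference that follows; the preset name, publish type and the template names inside plate_presets are made-up placeholders, not values defined by this documentation.

```yaml
# Sketch only - adjust templates and presets to your own pipeline configuration.
tk-flame-export:
  menu_name: Shotgun Shot Export
  shot_parent_entity_type: Sequence
  shot_parent_link_field: sg_sequence
  task_template: Basic shot template
  batch_publish_type: Flame Batch File
  bypass_shotgun_transcoding: false
  settings_hook: '{config}/flame_export_settings.py'
  batch_template: flame_shot_batch          # placeholder template name
  shot_clip_template: flame_shot_clip       # placeholder template name
  segment_clip_template: flame_segment_clip # placeholder template name
  plate_presets:
    - name: 10 bit DPX                      # appears in the export UI dropdown
      publish_type: Flame Render            # placeholder publish type
      template: flame_shot_render_dpx       # placeholder template name
```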
segment_clip_template Type: template Description: Toolkit file system template to control where segment based clip files go on disk. A segment in Flame is a 'block' on the timeline, so a shot may end up with multiple segments. This clip file contains Flame related metadata that Flame uses to deconstruct data as it is being read back into the system. task_template Type: str Default Value: Basic shot template Description: The Shotgun task template to assign to new shots created batch_publish_type Type: tank_type Default Value: Flame Batch File Description: The publish type for Flame batch scripts menu_name Type: str Default Value: Shotgun Shot Export Description: One line description of this profile. This will appear on the menu in Flame. shot_parent_link_field Type: str Default Value: sg_sequence Description: The name of a single entity link field which links shot to a the parent entity type defined in the shot_parent_entity setting shot_parent_task_template Type: str Description: The Shotgun task template to assign to new shot parent entities plate_presets Type: list Description: A list of dictionaries in which you define the various export presets to appear on the profiles dropdown in the user interface. These presets are matched up with export profiles defined inside the settings hook. The list of presets in this setting basically defines the various profiles and their locations on disk, and the hook contains all the details defining how the export should be written out to disk (resolution, bit depth, file format etc). The structure of this setting is a list of dictionaries. Each dictionary item contains three keys: name, publish_type and template. The name parameter is the identifier for the preset - this name will appear in the dropdown in the UI. It is also used to identify the preset inside the settings hook. The publish_type parameter defines the publish type that should be associated with any exported plates when they reach Shotgun. Lastly, the template parameter defines where on disk the image sequence should be written out. shot_parent_entity_type Type: shotgun_entity_type Default Value: Sequence Description: The entity type which shots are parented to in the current setup. settings_hook Type: hook Default Value: {self}/settings.py Description: Contains logic to generate settings and presets for the Flame export profile used to generate the output. batch_template Type: template Description: Toolkit file system template to control where Flame batch files go on disk shot_clip_template Type: template Description: Toolkit file system template to control where shot based clip files go on disk bypass_shotgun_transcoding Type: bool Description: Try to bypass the Shotgun server side transcoding if possible. This will only generate and upload a h264 quicktime and not a webm, meaning that playback will not be supported on all browsers. 
For more information about this option, please see the documentation.

Release Notes History

- v1.9.2 (2018-Oct-09) Fix batch setup thumbnail on new versions
- v1.9.1 (2018-Sep-14) Update to tk-flame v1.14.4
- v1.9.0 (2018-Sep-07) Use Flame transcoding engine for thumbnail generation
- v1.8.5 (2018-Aug-08) Fix render of a Write File Node coming from a setup loaded with Shotgun Loader
- v1.8.4 (2018-Jul-10) Use source timecode for frame numbering by default
- v1.8.3 (2018-Jul-03) Use source timecode for frame numbering
- v1.8.2 (2018-Jun-07) Fix regexes used to extract frame paths / Upgrade minimum tk-core version
- v1.8.1 (2018-Jun-05) Upgrade tk-core / Fix single frame export
- v1.8.0 (2018-May-17) Support for QT 5.9 / Support for Movie Assets
- v1.7.13 (2018-Apr-26) Upgraded to Flame 2019 icons.
- v1.7.11 (2018-Apr-11) Use Flame's in/out specified in the python hooks to define cut in/out
- v1.7.10 (2017-Dec-14) Add support for Multi-Channel OpenEXR
- v1.7.9 (2017-Oct-04) Force aspect ratios to be divisible by 4
- v1.7.8 (2017-Sep-29) Force aspect ratios to be divisible by 4
- v1.7.7 (2017-Jul-24) Bug fix: Publishing from a Batch render now sets a pertinent publish name.
- v1.2.2 (2017-Jan-23) Support remote Backburner Manager when submitting a clip to review with toolkit as a plugin.
- v1.7.6 (2016-Nov-03) Updated Flame icons
- v1.7.5 (2016-Aug-12) Bug fixes to support Flame 2017 export presets.
- v1.7.4 (2016-Aug-01) Updated to support Flame 2017 export presets.
- v1.7.3 (2016-May-23) Updates to use new time code fields for cut schema.
- v1.7.2 (2016-Apr-28) Fixed bug causing export to fail when a user had partially cancelled it.
- v1.7.1 (2016-Apr-25) Added improved Cut support. This adds support for the new shotgun cut schema integration:
  - When running the export, a new cut entity with associated cut items is created every time. The revision number for the cut gets bumped every time. On older versions of Shotgun, this is skipped.
  - Cut type is configurable via the app configuration. Cut name is based on the sequence name.
  - Export now correctly supports handles for versions of flame that emit handle related metadata. This was previously a problem if, say, 10 frame handles were requested from Flame but there was not enough media to generate a 10 frame handle for a segment - previously, the 10 frame handle length was assumed; now shotgun correctly stores the handle length.
  - Lots of code cleanup and refactoring to make the structure clearer and more decoupled.
  - Time code data is now generated and supports both NDF and DF modes. Time codes are pushed into shotgun using the SMPTE/Avid standard, e.g. HH:MM:SS;FF for DF and HH:MM:SS:FF for NDF.
  - Collating no longer happens - each shot is determined based on the lowest occurrence of the Shot in the timeline stack. Cut items are generated based on this plate layer in the timeline. Moving segments around in element layers above the base layer will not affect the cut (however versions are generated in Shotgun for all segments). This is different from before, where the length of each shot was incorrectly computed as a union of all segments belonging to that Shot.
- v1.6.1 (2016-Mar-03) Adds metrics logging
- v1.6.0 (2016-Jan-12) Added configuration options for image sequence start frame and handles.
- v1.5.1 (2015-Oct-06) Added additional compatibility logic for older flame versions.
- v1.5.0 (2015-Oct-05) Added support for 2016.1's new batch render outputPathPattern parameter.
- v1.4.0 FPS data is now embedded into transcoded quicktimes.
- v1.3.5 Improvements to transcoding and quicktime generation.
Details: - Added support for local quicktime generation - Tweaked default resolution and transcoding parameters for uploaded quicktimes - Ability to turn off quicktime uploads - Ability to customize which version of ffmpeg that is used for transcoding. This is an expert option and for more information, please see the engine documentation. - Ability to upload quicktimes to Shotgun in a way which bypasses server side transcoding. v1.2.1 Misc. performance improvements. Details: - Video media is scaled down to 720 (or whatever suitable resolution is closest) prior to quicktime generation and upload to Shotgun. - Shot update checks are done as single call - Shot creation is done as a single call - Improved UI progress feedback for folder creation - Cut changes and version creation is done in the foreground, as single call - Publishes are done as the first bg job after the export media, all in a single pass - Quicktime generation happens in separate bg jobs as the last thing in the workflow - Improved back burner job names. - General renaming and code cleanup. v1.2.0 Several optimizations and improvements. QA Release only, not intended for general use. v1.1.0 Improved frame range matching. Suppressed compatibility warning messages. Details: - The flame conform and the shotgun in and outs now match exactly. Previously, it was trying to intelligently normalize frame values. - Preset profile compatibility warnings are now handled gracefully and should no longer appear. v1.0.3 Added improved UI feedback when creating new shots in Shotgun. v1.0.2 Minor tweaks to some of the wording in the user interface text. v1.0.1 First official version of the Flame exporter. This resolves a couple of smaller issues that were found in previous (pre-1.0) versions of the app: - Cut tail values are now correct in shotgun (previously off by one due to flame exclusive frame out points) - Fixed issues with PySide when running jobs on the backburner queue.
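As a closing reference for the hook customization walked through in the Advanced Topics section above, here is a hedged sketch of what a copied config/hooks/flame_export_settings.py might contain once an external ffmpeg is in use. The method names and parameter strings come from the documentation above; the ffmpeg path is a placeholder, and the base-class pattern is the usual Toolkit hook override convention rather than something this page specifies.

```python
import sgtk

HookBaseClass = sgtk.get_hook_baseclass()


class FlameExportSettings(HookBaseClass):
    """Project overrides for the Flame export settings hook (sketch only)."""

    def get_external_ffmpeg_location(self):
        # Placeholder path - must exist on every machine in the backburner cluster.
        return "/opt/ffmpeg/bin/ffmpeg"

    def get_ffmpeg_quicktime_encode_parameters(self):
        # Parameters for the quicktime uploaded to Shotgun review,
        # based on the recommended settings quoted above.
        return (
            "-vcodec libx264 -pix_fmt yuv420p "
            "-vf 'scale=trunc((a*oh)/2)*2:720' -g 30 -b:v 2000k "
            "-vprofile high -bf 0"
        )

    def get_local_quicktime_ffmpeg_encode_parameters(self):
        # Local quicktime: no resolution constraint, higher bit rate.
        return "-vcodec libx264 -pix_fmt yuv420p -g 30 -b:v 6000k -vprofile high -bf 0"
```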
https://support.shotgunsoftware.com/hc/en-us/articles/219041148-Flame-Shot-Creation-and-Plate-Export
CC-MAIN-2020-29
en
refinedweb
Overview I grew up watching Star Trek: The Next Generation. I've always wanted to build a Star Trek themed device, so I finally got around to remixing one of my old projects to make a Star Trek Display Terminal. The terminal provides the following information: - Weather - using the National Weather Service - Indoor Temperature, Humidity and Volatile Organic Compound (VOC) strength - News Articles - from News.org - Schedule (with alarm function) - from Microsoft Outlook - Fitness Information (Steps, Move Minutes, Heart Points, Weight, Calories Burned) - from Google Fitness - a Resistor color code chart - an LED Resistor Calculator (to determine the resistor value based on current and source power) - Power and Current Measurement tool This information is made available through a combination of APIs and hardware sensors. I leverage an ESP32 for the microcontroller, and leverage the AWS Cloud for all of the data collection and aggregation. I also included a few "easter eggs": Ron McNair homage - Dr McNair is the reason I became an engineer; he grew up 45 mins from my hometown. He died in the Challenger explosion. - The name of my star ship is the "USS Ronald E McNair" - The Registry Number is from Sr McNair's birth date; the Prefix Code is the day he lost his life. - The use of a "prefix code" is a nod to Star Trek: Wrath of Khan (the greatest Star Trek movie of all time; don't @ me). - The numbers of the right of the terminal case refer to my fraternity (1906 - Alpha Phi Alpha) and my alma mater and field of study - (University of Oklahoma, College of Engineering) You have the option of customizing the numbering, lettering, and ship name, registry, etc for your own "easter eggs". Background Last year, I needed a low cost way to measure power and battery drain for a wearable project. I purchased an Adafruit INA219 Featherwing, and used some assorted spare parts to build a simple Power Measurement device (you can read more about it here). This year, I decided to upgrade the device... to make it more "techy". I originally planned to build a working Star Trek tricorder (the Mark IV TR-590 Mark IX version, for those that care)... but I quickly realized that it made more sense to create something that would sit on my desk (I mean, why go to all this trouble to make a cool device, just to close it up and put it in a drawer when not used). So, I I turned to making a version of the computer displays that you see on Star Trek TNG or Voyager (or the assorted movies). I toyed around with different designs, then came across a version created by the Ruiz Brothers of Adafruit. Adafruit does a great job of provided source files for their 3D printed projects, so I was able to take their original version and remix it for my hardware, buttons, and other peripherals. Things to know before your proceed This is a complex project. It's a "multi-disciplined make", that requires the following skills - Arduino IDE - AWS - You will need an account and will need to understand S3, Lambda, and Node JS - Soldering - 3D Printing - I provide step by step instructions for making my version of the project; however, I do not go into details on certain steps (I'll link to supporting instructions or documentation) - There are optional "add ins" to enhance the project in order to get Calendar and Fitness information. The functionality is included in the codebase; however you will have to create "apps" in the Azure and Google clouds to support the features. - This is ultimately customizable... 
you can swap out the Current Sensor with another featherwing You can use a different feather/wifi combination. Electronic Components - Adafruit ESP32 Huzzah Feather - Adafruit Featherwing Tripler Mini Kit - Adafruit 12-Key Capacitive Touch Sensor Breakout - Adafruit TFT FeatherWing - 3.5" 480x320 Touchscreen - Adafruit BME680 - Temperature, Humidity, Pressure and Gas Sensor - DC Panel Mount 2.1 Barrel Jack (2) - Lithium Ion Polymer Battery - 3.7 V 500mAh - Piezo Buzzer - Mirco USB cable and 5V charger (a typical USB phone charger will work) - Copper Foil Tape with adhesive - Optional - Adafruit INA219 Featherwing - Optional - 2.1 Male plugs - (for use with the INA219 Current Sensor) Link to all electronic components except 2.1 plugs: 3D Filament Components And Optional Paint/Sanding Components - Proto Pasta Conductive PLA - Additional 3D filaments - I used 4 colors - Grey, Black, Aqua (light blue) and White - .25 and 0.4 mm nozzles (I used the 0.25 for the lettering details). Hardware Assembly Components and Tools - M2x5 and M3x5 Screws - Straight and Right Angle Header Pins (See Adafruit wishlist for links) - Soldering Iron (and spool of solder, tip tinner, solder sucker, etc.) - Philips Head Screwdriver Kit - Shrink Wrap - Stranded Wire 22AWG - five or six colors - Solid Wire 22AWG - five or six colors - PCB Vise and Helping hands (optional, but makes soldering easier) - Diagonal Wire Cutters - Wire Strippers - Xacto Knife (for removing supports from 3D printer parts) - 3D Printer (if you plan to print yourself) - Putty or tape (to affix the battery to the inside of the printed case) - Digital calipers - Krazy Glue - Optional - Nitrile Disposable Gloves - Optional - Soldering Mat (optional, but protects surfaces) Note: if you don't have these tools, I suggest you check out Becky Stern's site for recommendations for good options. SoftwareStep 1: Download, Modify Files, and Print 3D Files You can submit the files to a 3D printing service (like 3D Hubs) or you can print your own. Files are available at PrusaPrinters.org. This case is a remix of the Py Portal Alarm Clock featured on Adafruit website. My project uses a similar TFT so, I was able to minimize the amount of design work needed to make the case work with my accessories. I used the following settings for printing: Case - printed at 0.2mm Layer Height. - Supports are needed, but are not needed everywhere (only on the sides and the middle where the keypad sits - Front and Back- printed at 0.2mm Layer Height with a 0.4mm nozzle, no supports - Side Number - printed at 0.10mm Layer Height with a 0.25mm nozzle, no supports - Keys - printed at 0.2mm Layer Height with a 0.4mm nozzle. You will need to print 7 and you will need to print with Proto-Pasta Conductive Filament. A few things you should know: Also, in regards to the side-number piece: - The Star Trek TNG production crew would sprinkle easter eggs in the props. If you look closely at various plaques and panels, you'll see people names, song lyrics, etc. I wanted to create my own "easter egg" for the side number, so I use "06" - which refers to my fraternity (formed in 1906), and "OUCOE" - which refers to my alma mater (University of Oklahoma, College of Engineering). - I created a "blank" side_number piece that you can modify in order to make your own custom number and text. - The Prusa MK3 allows you to change colors at different layer heights. I used this feature for the side-number piece. First, we'll affix the side number. 
Use a small dab of glue to put the side number in place. Next, we'll assemble the keypad. You'll need to cut 7 pieces of stranded wire - each between 10-12 inches in length. These will be connected to Pins 0-6 of the capacitive touch sensor. I suggest you use different colors (and write the colors/pin mapping down, as you'll need this information later). I used the following color combination: - Yellow - Pin 0/Button 1 - Gray - Pin 1/Button 2 - Red - Pin 2 /Button 3 - Blue - Pin 3 //Button 4 - Green - Pin 4 //Button 5 - White - Pin 5 //Button 6 - Black - Pin 6 //Button 7 - Strip 1/2 in from the end of each wire. - Cut 7 pieces of conductive tape (each about 1/2 inch in width) and solder the wires to the copper side of the tape. - Remove the adhesive backing and stick them to the bottom of the keys. You may need to trim off some of the copper tape. Note: the Keys can either be glued from the bottom (so that they are flush with the top) or glued from the top (so that they "float" a few mm from the top). I chose to glue mine from the top. Once you've completed all 7, use a small dab of glue affix the keys to the keypad. I find it easier to: - First "snake" the wire through the key hole. - Then put a small dab of glue on the ridge/rim of the key - Quickly put the key in place. Note: Krazy Glue works best here; you may want to use gloves to limit accidents and chances of skin irritation.Step 3: Solder/Assemble Components - Part B (Featherwings and Sensors) The next step is to prepare and assemble the hardware components. Ultimately, this means soldering header pins and wires for later use. This guide assumes that you're comfortable with soldering; if not, check out this "Guide to Excellent Soldering" from Adafruit. First we'll prepare out materials. For this step, you'll need: - TFT 3.5 Featherwing - ESP32 Feather - INA219 Featherwing - Tripler Featherwing - MPR121 Capacitive Touch Sensor - BME680 Sensor - Straight and Right Angle Header pins - Solid and Stranded Wire - Soldering Tools and Helping hands - Diagonal Wire cutters and wire strippers - Calipers Note: I suggest you first read through this step and cut all your wires and headers before you start soldering. That way, you won't have to stop to measure/cut. Prepare the TFT 3.5 Featherwing The TFT is ready to use out of the box with the only one adjustment. You'll need to solder a wire between the "Lite" pad and a pin solder pad. Our code uses ESP32 Pin 21 to control the TFT lite. Arrange the TFT the "long" way, with the reset button at the bottom. Pin 21 will be the bottom left pin. Cut a 40mm piece of stranded wire. Strip the ends so that a few millimeters of wire are showing on each end. Using your soldering iron, carefully solder to both pins. Note: you only need about 35mm of length... so you can trim your wire as needed. Also, I find that adding solder to the pad, then to the wire, then soldering the wire to the pad is the easiest approach. Finally - these pads are small... if you're uncomfortable, you can always skip this step: it's only for turning off the TFT with the keypad. Prepare the ESP32 Feather You'll need to solder standard male header pins to the ESP32. Your ESP32 should come with the headers, though you may need to trim them to get to the correct length (16 pins on the long side; 12 pins on the short side). Header pins are made to "snap away", so you can use your diagonal cutters to clip the headers to the correct length. Again, Adafruit has great instructions on how to do that, so check it out if you need guidance. 
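As a sanity check on the keypad wiring described above, here is a hedged Arduino sketch fragment that reads the seven touch pads (0-6) with the Adafruit_MPR121 library. The 0x5A address is the breakout's default I2C address; the serial printout is just a placeholder for whatever the real firmware does with a key press.

```cpp
#include <Wire.h>
#include "Adafruit_MPR121.h"

Adafruit_MPR121 cap = Adafruit_MPR121();

void setup() {
  Serial.begin(115200);
  if (!cap.begin(0x5A)) {           // default I2C address of the MPR121 breakout
    Serial.println("MPR121 not found - check the SDA/SCL wiring");
    while (true) { delay(10); }
  }
}

void loop() {
  uint16_t touched = cap.touched(); // bitmask of the 12 pads
  for (uint8_t pad = 0; pad < 7; pad++) {
    if (touched & (1 << pad)) {     // pads 0-6 map to keys 1-7 on the keypad
      Serial.print("Key ");
      Serial.println(pad + 1);
    }
  }
  delay(100);
}
```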
OPTIONAL - Prepare the INA219 Featherwing First, solder male headers to the featherwing (using the same instructions as used for the ESP32). Next cut four 20mm lengths of stranded wire. I would make 2 BLACK and the others a different color. I used GRAY and BLUE for my color choices. Strip the ends of the wire so that 3-4mm of copper wire is exposed on each end. You'll solder one each of each wire as below: - GRAY -> V+ (plus) - BLUE -> V- (minus) - BLACK -> GND (ground) - BLACK -> GND (ground) Leave the other ends of the wires at this time; we'll ultimately solder them to the DC 2.1 plugs. Attach the Piezo Buzzer The INA Featherwing comes with a small prototyping area; we'll use that to attach our piezo. The piezo will give our project the ability to beep and sound alerts, alarms, etc. The piezo connects to ESP32 PIN 13; this correlates to the pin next to the USB pin on the featherwing (see image for arrows). The other piezo pin connects to ground. The pieze pins are long enough to solder them directly to the featherwing... you'll just need to bend the pins into a "bow-legged man" shape (see image). Once you have the pins in place, use a helping hands (or tape) to hold the piezo in place, and solder from the underside of the featherwing. Note - If you do not use the INA219, then you'll need to solder the piezo directly to the featherwing board. Prepare the Tripler Featherwing The featherwing saves us a lot of soldering; it can hold 3 feathers/featherwings... so we'll use it to make electrical connections between the TFT, ESP32, INA219 (as well as the piezo and the TFT Lite pin). To make the connections properly, we'll need to solder two pairs of stacking headers and one pair of standard male headers. - The regular male headers will go on in the "top" spot, but will be soldered to the bottom side of the Tripler. - The two stacking headers will be soldered in spots 2 and 3, on the top side of the Tripler. This is a little confusing, so be sure to look at the images to understand where each header is placed. Also, a combination of a PCB Vise and Helping Hands can greatly aid in soldering the components. Prepare the BME 680 Sensor and the MPR121 Capacitive Touch Sensor The last two sensors are the most difficult the attach. We need to attach header pins to the breakout boards before finalizing the assembly. The BME Sensor is attached at a 90 angle, so that I can align the sensor to a hole in the case (so that the sensor can capture temperature, gas, humidity). You'll need to solder right angle pins to the holes. See the images to ensure you align them correctly. The Capacitive Touch sensor is straightforward - just solder straight male connectors pins, as outlined here. Note: you SHOULD NOT solder pins to the Capacitive Touch Pins (0 - 11). Attach BME 680 and MPR121 Sensors to Tripler Board Both Sensors communicate via I2C... which means we only need to make 4 connections between the breakout boards and the Featherwing. For simplicity, I solder all connections between the boards. BME 680 For this sensor, I use Helping Hands and a PCB Vise to hold both components in place (see image above). The BME680 Sensor should be placed at the end of the featherwing. See the images above to confirm placement. The process of soldering the connections is tedious, so go slowly. 
I use solid core wire for those connections: - BLACK - GND - RED - VIN - YELLOW - SCL (SCK pin on the sensor to the - ORANGE - SDA (SDA pin on the sensor) Note: The SCL and SDA pins are needed for both sensors, so it might be easier to use a SCL or SDA pin on another part of the Featherwing. MPR121 Helping hands also help when soldering this sensor in place (tape works as well). The code used I2C for communication to the ESP32, so you'll be connecting the SCA and SDA pins.Step 4: Solder/Assemble Components - Part C (Keypad to Capacitive Sensor and Feathering in Case) You'll solder the wires from the Keypad to the Capacitive Touch sensor in this step. Use the same color mapping from earlier. If you followed my color scheme, then you'll solder the colored wires as follows: - Yellow - Pin 0/Button 1 - Gray - Pin 1/Button 2 - Red - Pin 2 /Button 3 - Blue - Pin 3 /Button 4 - Green - Pin 4/Button 5 - White - Pin 5/Button 6 - Black - Pin 6/Button 7 Once soldering is done, use a twisty-tie to hold the wires in place. Next, screw the TFT screen to the "Front" piece. You'll use the M3 screws (four total). Once the TFT is in place, screw the "Front" piece to the case. Again, you'll use M3 screws (two). Next, plug the Featherwing Tripler, with the all components plugged in, to the TFT. Note - If you plan to use a battery, be sure to plug it into the ESP32-JST port before inserting the TFT. Use tape to affix the battery to the inside bottom of the case.Step 5: OPTIONAL - Solder/Assemble Components - Part D (INA219 Feather) If you are using the INS219 sensor, then this is where you attach the wires to the DC plugs. Use a soldering iron to connect the INA219 wires. - The Black wires should go to the GROUND for each DC plug. - The Gray wire should go to the INPUT DC plug - The Blue wire should go to the OUTPUT plug. - Insert the DC plugs to the back cover, and screw them in place. The final step in hardware assembly is to screw the back cover in place - using M2 screws (4). From there, plug in the USB cable, connect it to your PC, and proceed to software steps!Step 7: Prepare AWS Environment As I stated in the intro, the premise of the solution is as follows: - The Terminal, powered by an ESP32, uses an MQTT (over Wifi) connection to communication with the AWS cloud. - The AWS cloud does the bulk of the processing and serves as a relay between the Monitor and the requested services. There are a few things we'll need to do in this step: First, you need to set up your AWS environment, if you haven't yet. This tutorial assumes you have an AWS Account already set-up, so instructions on setting up a cloud account are not included. That being said, the steps are straight-forward and can be found here. Once you're past that step, you need to create a few services, so log into the AWS console. Create a Thing and Download Keys AWS IoT Core facilitates the communication between the AWS cloud and the display. You'll need to create a "thing" and download certificates to support the communication. [Note: most of these instructions were taken from a guide written by Moheeb Zara, AWS Evangelist] Name the policy AllowEverything. Choose the Advanced tab. Choose AllowEverything, Attach. - Open the AWS console and select AWS IoT Core. - In the AWS IoT console, choose Register a new thing, Create a single thing. - Name the new thing "starTrekESP32". Leave the remaining fields set to their defaults. oose Next. - Choose Create certificate. 
Only the thing cert, private key, and Amazon Root CA 1 downloads are necessary for the ESP32 to connect. Download and save them somewhere secure, as they are used when programming the ESP32 device.

- Choose Activate, then Attach a policy.
- Skip adding a policy, and choose Register Thing.
- In the AWS IoT console side menu, choose Secure, Policies, Create a policy.
- Paste in the following policy template:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "iot:*",
      "Resource": "*"
    }
  ]
}

- Choose Create. (Note: this wide-open policy is only recommended for getting started. After you are comfortable with everything working, please go back and change it to something more restrictive.)
- In the AWS IoT console, choose Secure, Certificates.
- Select the certificate created for your device and choose Actions, Attach policy.
- Before you leave, click on "Settings" (on the left menu). Your "Custom endpoint" will be shown; save that to a text file... you'll need it when you configure the ESP32.

Create a Blank Lambda file

Lambda is a form of serverless compute, so we don't have to worry about any hardware here. Ultimately, this is where we'll place our updated code (which we'll do in a few steps). For now, we just want to create a placeholder:

- Log back into the AWS console (if you logged out) and click on Lambda.
- Click on the "Create Function" button.
- On the next page, enter a basic name, like starTrekDisplay.
- Select Node.js 12.x.
- Under permissions: if you know your way around Lambda, you can select whatever option makes sense - you will need permissions on CloudWatch, IoT Core and S3 (read and write). If you are uncertain about permissions, select "Create a new role with basic Lambda permissions"; later on, we'll modify the permissions.
- Click Create Function.
- After a minute, you'll enter a new screen with a "hello world" code snippet. Scroll down to the bottom to Basic Settings and click "Edit".
- Change the timeout from 3 seconds to 2 minutes and 0 seconds. Note: your code should never run longer than 5-10 seconds; however, we need a longer timeout for your initial authentication with Microsoft (for calendar functionality). Once you've authenticated, you can change this to 20 seconds.
- Hit save.

Create an IoT Rule

- Choose "Create a New Rule".
- Rule Name: ESP Connection
- Rule query statement: SELECT * FROM 'starTrekDisplay/pub'
- Stay in the Lambda console and scroll up. Select "Add Trigger".
- Select AWS IoT. Then select "Custom Rule".
- Click "Add".

Create an S3 Bucket and Folder

- Navigate to the AWS Console and select S3.
- You'll need a bucket and folder to store authentication files. This folder should be private. I suggest you use any bucket you already have and create a folder in it called "starTrekDisplay". Note - if you do not have a bucket, create one using the instructions here.

I use the following third party services in the project:

- Worldtime API - for time
- National Weather Service APIs - for weather
- Microsoft Graph API for access to my calendar
- Google Fitness API for access to fitness information

You will need to set up accounts and download keys in order to leverage the same services.

Worldtime API - for time: This API does not require a key, so no action is needed to make this work.

National Weather Service APIs - for weather: The National Weather Service API is free, and no API key is required.
However, they do request that you pass along contact information (in the form of an email) in every request (as part of the header file). You'll add contact information to the code in the next step. OPTIONAL - Microsoft Graph API and Google Fitness API This is the most complex part of the code set-up. Our device does not have a full-fledged keyboard... therefore we use something called OAUTH for Limited Devices to access our calendar. Unfortunately, you have to create an Azure "app" and a Google App in order for you code to use OAUTH for limited devices. Instructions for creating an app are here for Microsoft fand here for Google. Here are a few things you should know: Microsoft: - You'll be asked to specific what users can use the app. I suggest you select "Accounts in any organizational directory and personal Microsoft accounts". This will allow you to use personal Microsoft accounts and corporate accounts (in most cases). - You'll want to select "Mobile and Desktop" applications, however you don't have to fill out all the information (since this is a personal app). This means that you can't make your app available to the world.... but that's ok in this case - Once your app is set up, you'll need to select the permissions needed. I asked for permissions related to profiles and calendars (see the image in the gallery for the full list of permissions). You'll need to select this same set. If you add more permissions, then you will need to change the scope appropriately in the next step. - You will have to create an Azure and Google cloud account. This is free, and you won't be charged anything This tutorial assumes you are familiar with Node.js development and Lambda. Download the linked file, and make modifications to update: - Microsoft App and Client information - Google Key - S3 bucket name - S3 folder name - AWS Endpoint You'll also need to download the following node libraries: Once those changes are made, upload the code to the placeholder lambda you created earlier.Step 10: Prepare Arduino IDE and Download Libraries This guide also assumes you are familiar with Arduino. You will need to ensure your IDE is set up to work with an Adafruit ESP32. Follow the instructions here if you need assistance. Once this is complete, download the following libraries: - Adafruit_GFX (from the library manager) - Adafruit_HX8357 (from the library manager) - TFT_eSPI (from the library manager) - TFT_eFEX () - PubSubClient (from the library manager) - ArduinoJson (from the library manager) - Adafruit_STMPE610 (from the library manager) - Adafruit_MPR121 (from the library manager) - Adafruit_INA219 (from the library manager) - Adafruit_Sensor (from the library manager) - Adafruit_BME680 (from the library manager) - Tone32 () Next we will need to modify a few of the libraries: - Open the PubSubClient folder (in the Arduino/Library folder) and open PubSubClient.h. Find the value for MQTT_MAX_PACKET_SIZE and change it to 2000. - Next, open the TFT_eSPI folder, and open the User_Setup_Select.h file. Comment out any "includes users_setup..." line and add this line: #include <User_Setups/CUSTOM_TRICORDER_HX8357D.h> Afterwards, download the linked Custom_Tricorder.zip file, and extract to h file to User_Setups folder in the TFT_eSPI folder in your Arduino libraries folder. Now, we can move onto updating the Arduino codeStep 11: Update & Install Arduino Code and Engage! Arduino Code Download and unzip the linked file for the Arduino code. Go to the secrets.h tab. 
You'll need to update the following: - WIFI_SSID = update with your wifi SSID - WIFI_PASSWORD = update with your wifi password - TIMEZONE = update with your timezone from this list - LAT (you can use a service like "" to find your Latitude and Longitude - LNG - AWS_IOT_ENDPOINT = you should have saved this from earlier. It should look like "dx68asda7sd.iot.us-east1-amazonaws.com" - AWS_CERT_CA - AWS_CERT_CRT - AWS_CERT_PRIVATE You will have also downloaded the certificates from an earlier step. Open then in notes editor (e.g. notepad) and paste the text between the 'R"EOF( ' and ')EOF";'. Be sure to include "-----BEGIN CERTIFICATE-----" or "-----BEGIN RSA PRIVATE KEY-----". Image Files The ESP32 comes with a small filesystem. We use this filesystem to save images for our program. You'll need to install the tool that allows you to upload files. - First, visit the in depth tutorial on Random Nerd Tutorials. - Once you have this working, you can upload the files in the data folder (also included in the zip file). Engage! Upload the final Arduino code, and you're done! Note - The Star Trek name and Star Trek images are owned by CBS/Paramount. They have a fairly lax policy when it comes to cosplay and fan fiction - please read here if you have questions.
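For reference, here is a hedged sketch of the general shape of the secrets.h values described above. Every value shown is a placeholder, and the exact declarations in the downloaded file may differ - the point is simply where your own network details, endpoint and downloaded certificates end up.

```cpp
// secrets.h - placeholder values only
static const char WIFI_SSID[]        = "your-wifi-ssid";
static const char WIFI_PASSWORD[]    = "your-wifi-password";
static const char TIMEZONE[]         = "America/New_York";  // pick yours from the timezone list
static const char LAT[]              = "35.22";             // latitude/longitude placeholders
static const char LNG[]              = "-80.84";
static const char AWS_IOT_ENDPOINT[] = "xxxxxxxxxxxx.iot.us-east-1.amazonaws.com";

// Paste each downloaded certificate between the EOF markers,
// including the BEGIN/END lines.
static const char AWS_CERT_CA[] = R"EOF(
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
)EOF";

static const char AWS_CERT_CRT[] = R"EOF(
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
)EOF";

static const char AWS_CERT_PRIVATE[] = R"EOF(
-----BEGIN RSA PRIVATE KEY-----
...
-----END RSA PRIVATE KEY-----
)EOF";
```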
https://amazonwebservices.hackster.io/darian-johnson/make-it-so-star-trek-tng-mini-engineering-computer-5718fb
CC-MAIN-2020-29
en
refinedweb
zeedown - Tiny Slack-style markdown renderer for Node and browser, with CLI.

Installation

npm i -S zeedown or yarn add zeedown

Usage

zeedown(text: string): string

zeedown takes one parameter: a required string. Differences from standard MD:

- Strong is a single set of asterisks
- Emphasis is a single set of underscores
- Strikethrough is a single set of tildes
- No headers

This is essentially the same as Slack's basic Markdown.

import md from 'zeedown' // or const md = require('zeedown')
md('some string')
md(process.argv[2])
$('.foo').replaceWith(md($('.bar').text()))

CLI

zeedown comes with a small CLI. Usage: cat foo.md | zeedown > foo.html

Supported Features

- Strong (Bold)
- Emphasis (Italic)
- Deletions (Strikethrough)
- Inline code
- Fenced code blocks (not indented blocks)
- Blockquotes
- Ordered/unordered lists

Why Not Feature X?

If you want links, images, syntax highlighting, and other fancy stuff, you're probably better off using a full-featured implementation. Try marked, it's super popular.

Status

Everything specified works. ol, ul, and blockquote are a little funky (they only work if not indented at all, and need an extra pass to strip extra tags).

Repository
https://www.javascriptcn.com/read-77024.html
CC-MAIN-2020-29
en
refinedweb
For some reason, the nuget package manager is not always installed by default. I came across this problem a few times already. What you want to do is this. First: You need to unistall npm(nuget package manager), you did this already, you can try this again, just to be sure :) Visual studio -> Tools -> Extensions and Updates -> Npm -> unistall Second: after you unistall npm, you need to install it again to use it (ofc). There are several ways to install npm: 1) install it under Visual studio -> tools -> extenstions and Updates> npm -> install 2) GOTO, search the install nuget link and click on it, downlaod the .vsix file and install in on your visual studio Third: after the installation, close and re-open visual studio, hopefully you will never get that error. Try if possible rebuilding in CS6. I'm trying to resolve a similar random error 'Error #1009: Cannot access a property or method of a null object reference' and I can only surmise that CC is the culprit. If rebuilding in CS6 solves your problem, please let me know! You need to implement a custom classloader, something like this Class<?> cls = new ClassLoader() { public java.lang.Class<?> loadClass(String name) throws ClassNotFoundException { byte[] a = read class bytes from known location return defineClass(name, b, 0, a.length); }; }.loadClass("test.MyClass"); I have just found a way to add classes to another web project and it worked on my side. First, is to create a .jar file of your main project's classes and add it to your JSFUnit's project properties. You can't create a .jar file for a web project using netbeans IDE, so locate your build.xml from your project's folder and add this: <target name="-post-compile"> <jar destfile="${basedir}/dist/my_web_app.jar"> <fileset dir="${basedir}/build/web/WEB-INF/classes"> </fileset> </jar> I'm not sure about your question 1, but if I understand it correctly, you would like to build with Eclipse two Maven projects where A depends on B? Then you may use an Eclipse plugin like m2eclipse. For your second question, I think the solution would be to use the system scope for your dependency that you can't find in any public repository. Of course, if you can deploy the dependency on an entreprise repository it would be better. The first thing is you are having the wrong credentials for the repository ...(): Access denied to:... Apart from that the repository does not look like Maven repository. And also the gives a page not found which gives me the impression this is not a valid repository. Furthermore i would suggest first to use Maven Central only which means no repository entries in your pom.xml files which is the wrong way at all. Just check if it works without the repository definition. If you really need to define other repository better use a repository manager (Artifactory, Nexus, Archiva). One other thing is that i see in your pom the version which is a released versions already in c I had same issue with the latest Android Studio () today, and I finally found that configuring a https proxy for Gradle can solve the issue. I only set a http proxy at the beginning. Gradle needs to sync some files via HTTPS. Use IP address instead if hostname of proxy can't work. As you said you are using modular appfuse application. In appfuse modular application two module exists: core and web. Web module depends to core module. So you should first install the core module. Try this code in your core module directory: mvn install 1-Take for workspace "MyBook" 2-In eclipse, create a "Java project" ch-01. 
3-right click on ch-01 , new, project "MyExampleApple" for each folder like ch-01 do number 2 and 3. :) This will create a subFolder in your Workspace and a project inside this subFolder. workspace subfolder project MyBook Ch-01 MyExampleApple :D Try download the assembly from (the link can be found at ). PS: just copied error message to google. We are using Eclipse here too and have to handle a workspace with more than 200 plug-ins. Every now and then people have similar problems with their workspace and inconsistencies reported in a weird way by Eclipse. What people here usually do is (next step only in case previous step didn't - trying to ContextMenu->Team->Clean/Refresh the whole workspace - creating a new workspace and check out all necessary files from the repository - reinstalling Eclipse to a new directory From my experience after using the Eclipse IDE on a daily basis for many years, it doesn't make very much sense to waste too much time with these issues, unless they aren't solved by one of the steps above. It takes too much time to struggle with these things, while starting from scratch is done in an hour or le Navigate to project folder through console (cmd in windows, terminal in linux) type mvn eclipse:eclipse Press Enter. If it is web application, type mvn eclipse:eclipse -Dwtpversion=2.0 Press Enter. Delete the project (not physically) from eclipse and reimport. Not sure if this is it, but I've ran into a somewhat similar problem with images from ProjectA, where I want them to be shown in ProjectB. This however is not possible in Store Projects, the image should be in the same project and cannot be referenced from ProjectB to ProjectA (without returning any error). A workaround (if possible) would be to put the two projects in the same solution and add the charm flyouts (if they are images) as linked items in ProjectB... Hope this gets you towards a solution. Why don't you do this Right click on Project Root folder > properties > Java BuildPath > Library Tab Check whether any of library is missing (showing error with red cross) if yes then add this library to your project don't forget to CHECK it on Order and Export Do one more thing Right click on Project Root folder > properties > Android > set your Project Build Target check any one target Check your Android-sdk path from Window > Preference > Android The plugin needs updating for more recent versions of Cordova. You can download my Eclipse test project containing the updated plugin code and the resulting APK from here. Note that because of this this bug in Android 4.x I had to replace this line in EmailComposer.java: with ArrayList<String> extra_text = new ArrayList<String>(); extra_text.add(body); extra_text); This works around the problem for plain text emails but won't work for HTML emails because Spanned (returned by Html.fromHtml) is not a subclass of Charsequence. When I tried casting the result of Html.fromHtml() to a string, the tags appeared as part of the text :-( Also when I think problem is that, You have Same Name controller in Both Area and and application. Like You have Home Controller in Normal Application and Also in AREA And, it is causing the Duplicate Declaration of Same Controller. 
The way doing that is, specifying the NAMESPACE of the Controller Like the Following : public override void RegisterArea(AreaRegistrationContext context) { context.MapRoute( "BEK_default", "BEK/{controller}/{action}/{id}", new { action = "Index", id = UrlParameter.Optional }, new string[] { "MyAppName.Areas.BEK.Controllers" } // specify the new namespace ); } If not that Case, Please post the Error Message you are getting. Unfortunately CocosDenshion does not support of ARC so far. You have you specify not to compile your cocosDenshion files via ARC. For this go to Targets->Build Phases->Compiles Sources then find the specific file you want to remove ARC from it, once you find that file double click on it and add -fno-objc-arc as flag. Don't extend AbstractController, just annotate using org.springframework.stereotype.controller eg : @Controller @RequestMapping(value = "/") public class RevenueReportController Can you share the tab that you have created? That will help figure out the problem. In general through, all tabs in Glimpse are plugins - including the ones that come with the Glimpse NuGet package, so it seems that some are being picked up but not others. A few troubleshooting tips: Check your bin folder and make sure the extension assembly is there Make sure your extension class is public Turn on logging. Glimpse will tell you what any loading exceptions it may of had. Restart your web server Yes, maven needs to "know" what those directories mean, though a clojure build plugin may use that directory by convention - see for example: Change your dependencies like that dependencies { compile 'com.android.support:support-v4:13.0.+' compile 'org.beanshell:bsh:2.0b4' } You can now remove manually downloaded dependencies in libs directory. What you did was adding libraries to Android Studio project only. You should always add them to Gradle build files as only this is interpreted by Android Build Tools. There is also new version of build tools 18.0.1, you can install them and change version in you build.gradle. As far as I know they can handle aar dependencies better. You can right click on project A from Package Explorer, then click on Properties --> Java Build Path --> Projects --> Add. Add library projects B and C from here. No need to include them on Manifest. Import classes which you want to use on project A's class. If a jar isn't available in a Maven repo and you can't add a dependency in BuildConfig.groovy, you can add it to the lib directory, but it's not automatically discovered. Run grails compile --refresh-dependencies to get it added to the classpath.. The link you've posted shows how to add Morphia if you're using maven to import your dependencies. It sounds like you're not doing this, so what you (probably) want to do instead is download the latest Morphia jar (currently 0.101.0) from: Then to add it to your classpath in Netbeans, see the answer to this question: How to add a JAR in NetBeans You should then be able to use the Morphia library First, make sure that the Oracle.DataAccess Assembly is indeed in place on the system. It might well be that something failed in your installation process. If it is in place, and you still get the error, you will need to install the Oracle client on this system. ODP is just a wrapper using the client. It will not enable a system without an Oracle client to access an Oracle database as far as I remember. Tools + Options, Projects and Solutions, Build and Run. Change the "MSBuild project build output verbosity" setting to Normal. 
Pay attention to the build output, the messages you see after "XapPackager". Which show which files are getting added to the Xap package. Your DLL needs to be in that list. If it is not then your program will fail as described. In which case you'll need to find out why it is getting skipped. Start that by checking that the Copy Local property of the .winmd reference is True. This *? indication comes from the git plugin that is shipped with Titanium Studio. It indicates that the file is not commited. To answer your "can't find file" - question, I need to know some more details about your environment (target platform, sdk version, example code, ...) I am not sure if I fully understand the question so please forgive if I miss something. I desired a similar setup, multiple projects in a workspace, but all managed by Cocoapods. I needed the projects to link to each other. My motive was promoting MVC separation, so I had an App project (view), a Controller project, a Model project. The shell of a project is here: Here are the basic steps: Create your projects, and add a podspec to each one. (e.g. controller podspec like this one:) Add a Podfile that links all of the podspecs together. If you put a file under /resources it ends up in my-project.war/my-project.war/WEB-INF/classes/. This is where things go in your classpath. Instead, you should put the css file under /webapp. This will make them go under my-project.war/. These files will now be directly accessible by going to You can use Add-Type cmdlet: Add-Type -Path "path-to-dll.dll" See: You can follow this tutorial: Example: Install the jar to your local maven repository: mvn install:install-file -Dfile=cxf-2.7.3.jar -DgroupId=org.apache.cxf -DartifactId=cxf-bundle -Dversion=2.7.3 -Dpackaging=jar Edit the pom.xml file in your project to include the newly added dependency: <dependency> <groupId>org.apache.cxf</groupId> <artifactId>cxf-bundle</artifactId> <version>2.7.3</version> </dependency> This should work regardless of the IDE you are using. The closest example of that is to look for commits like this one: "Added a clone button to support one-click clone". As you can see, more than one .haml file is impacted. This isn't the project menu, but can still give you a starting point. You should find more information in the GitLab forum. You'll need to manually install the driver from Oracle. In command prompt install the driver to Maven's local repository using mvn install:install-file. > mvn install:install-file -Dfile=your_ojdbc.jar -DgroupId=com.oracle -DartifactId=oracle -Dversion=10.2.0.2.0 -Dpackaging=jar -DgeneratePom=true Now you add the dependency to the pom.xml <dependency> <groupId>com.oracle</groupId> <artifactId>oracle</artifactId> <version>10.2.0.2.0</version> </dependency> Maybe you can follow the instructions in chapter 8 in this document. Be also sure of the .classpath file is not hidden in your project folder, when adding your .jar file. It can typically be the issue.. the error message format shows that you are not using Gradle to build. You either loaded the eclipse project file (.classpath/.project) or had some existing iml files. This means changes to build.gradle will have no effect since Studio uses the IntelliJ internal builders instead. You need to import your build.gradle file instead to create a new Studio project. You need to use South (or something similar) to do a Schemamigration. Read the documents to see how to do it. 
It's quite easy, and will be very helpful later in your project (when you have lots of data in your database). The only other option (other than using another migration tool) is to drop the database and create it again, which would lose you all of your data. The launcher.properties should not be under a folder called Login. It should be placed directly in the src/main/resources/com/abc/xyz folder. It is really as simple as I said but if the resources folder is not marked as a sources folder then this may be the problem. This is the initial class and setup: Now create the resources folder: This newly created folder should be automatically marked as a sources folder and if it is blue color marked then it is. Otherwise you'll have to mark it manually: Now you'll be able to add packages to it: And now you can add the file to it: And rerunning the application will not give you any null value back: And the package view will surely show the launchers.properties file as well:
http://www.w3hello.com/questions/-Error-adding-new-ASP-Net-Web-project-
CC-MAIN-2018-17
en
refinedweb
Perl Basics You write Perl using a simple text editor, like pico or nano. Log on to a UNIX computer and use a text editor to open a file called script.pl, e.g. nano script.pl Perl scripts traditionally end in .pl. This isn’t a requirement, but it does make it easier to recognise the file. Now type the following into the file; print "Hello from Perl!\n"; Save the file. You have just written a simple Perl script! To run it, type perl script.pl This line uses the Perl interpreter (called perl) to read your perl script and to follow the instructions that it finds. In this case you have told Perl to print to the screen the line “Hello from Perl!”. The \n represents a return (newline). Try removing the \n, or adding multiple \n’s and rerunning the script to see what I mean. This was a simple script, but Perl is a language designed to help you write small and simple scripts. Indeed, in my opinion Perl is the best language around for writing small and simple scripts (less than 100 lines of code). This script has introduced three of the basic building blocks of Perl; - A command - A string Hello from Perl!\n. A string is just a piece of text, which can contain multiple lines. Strings are always enclosed in double quotes. - A line of code print "Hello from Perl!\n";. A line of code forms a complete instrucution which can be executed by Perl. Perl executes each line of code, one at a time in order, moving from the top of the file downwards until it reaches the end of the file. Note that each line of Perl code must end with a semicolon ;. A string is a type of variable. A variable is a value in a script that can be changed and manipulated. Variables in Perl are identified using the dollar sign $. For example, use a text editor to write a new Perl script, called variables.pl nano variables.pl Type into the script the following lines (remember to include the semicolons at the end of each line!); $a = "Hello"; $b = "from"; $c = "Perl!"; print "$a $b $c\n"; What do you think will be printed when you run this script? Run the script by typing; perl variables.pl Did you see what you expected? In this script we created three variables, $a, $b and $c. The line $a = "Hello"; sets the variable $a equal to the string Hello. $b is set equal to the string from while $c is set equal to Perl!. The last line is interesting! The $a $b $c\n. However, Perl knows that $a, $b and $c are variables, so it substitutes their values into this string (so $a is replaced by its value, Hello, $b is replaced with from and $c is replaced with Perl!). Thus the Hello from Perl!\n to the screen. Perl can also put numbers into variables. Create a new script (numbers.pl) and write this; $x = 5; $pi = 3.14159265; $n = -6; $n_plus_one = $n + 1; $five_times_x = 5 * $x; $pi_over_two = $pi / 2; print "x equals $x. pi equals $pi. n equals $n.\n"; print "Five times x equals $five_times_x.\n"; print "pi divided by two equals $pi_over_two.\n"; print "n plus one equals $n_plus_one.\n"; What do you think will be printed to the screen when you run this script? Run this script ( perl numbers.pl). Did you see what you expected?
https://chryswoods.com/beginning_perl/basics.html
CC-MAIN-2018-17
en
refinedweb
The error message is from your compiler. It is saying that the file modGPRS.c (in whatever directory it is in, call it A) has included a file A/../inc/intModGPRS.h, which is itself trying to include a file called datapkt.h, which cannot be found. This is either because your makefiles are not properly telling the compiler where the file datapkt.h is, or (more likely) you have not installed a prerequisite library. Make sure you have installed everything that needs to be installed before trying to build whatever it is that you're trying to build. Without at least telling me what it is you're trying to install, I can't help you further.

This is possibly a duplicate of "Benefits of inline functions in C++?" The practical performance implication depends on many factors. I would not concern myself with it until you actually have a performance problem, in which case I'm sure bigger gains can be obtained by optimizing other things. Don't keep all your code in headers - if you continue with this trend you will hate yourself later because you will be waiting for your compiler most of the time. LTO is a better approach if you are looking for similar optimizations, and it has less of an impact on compile time.

In the header:
    extern decltype(generic_type_test<const type_expression_base>)* is_base_type;
In the cpp:
    auto is_base_type = generic_type_test<const type_expression_base>;

It's the linker that handles all that. The compiler just emits a special sequence in the object file saying "I have this external symbol func, please resolve it" for the linker. Then the linker sees that, and searches all other object files and libraries for the symbol.

There's no right or wrong answer to this. My personal preference is generally one protocol per header. However, if there are two or more protocols that logically go together and will usually be imported together, you might put them in the same header file. If your protocols form an API for a framework, that is another reason to put them together, so classes that use the framework API can just do one import. But I would recommend not using a generic name like protocol.h; try to think up something more descriptive of what the protocols are actually for, e.g. all the protocols and class interfaces for Cocoa are logically imported (nested imports are used) in one header called Cocoa.h. On the second part, I find it generally better to keep protocols and class interfaces in separate header files.

It seems no. I got:
    Gherkin::Parser::ParseError: ../features/deleting_tickets.feature: Parse error at ../features/deleting_tickets.feature:17. Found step when expecting one of: examples, feature, scenario, scenario_outline, tag. (Current state: tag). (Gherkin::Parser::ParseError)
So one more disadvantage met.

Basically, you should lock the file while writing or reading. At least, it guarantees that there is no problem. It is the way of good programming! The example is shown below.
    <?php
    $fp = fopen("/tmp/lock.txt", "w+");
    if (flock($fp, LOCK_EX)) { // do an exclusive lock
        fwrite($fp, "Write something here ");
        flock($fp, LOCK_UN); // release the lock
    } else {
        echo "Couldn't lock the file !";
    }
    fclose($fp);
    ?>

Create a library, and add the directory of the library and the header file to "path". The header file does not have to have a ".h" extension.
This is the vector file for STL:
    #ifndef __SGI_STL_VECTOR
    #define __SGI_STL_VECTOR
    #include <stl_range_errors.h>
    #include <stl_algobase.h>
    #include <stl_alloc.h>
    #include <stl_construct.h>
    #include <stl_uninitialized.h>
    #include <stl_vector.h>
    #include <stl_bvector.h>
    #endif /* __SGI_STL_VECTOR */
You can just create a header file named "primes" and #include <primes>.

    awk 'FNR==1 && NR!=1{next;}{print}' *.csv
Tested on Solaris UNIX:
    > cat file1.csv
    Id,city,name ,location
    1,NA,JACK,CA
    >
    > cat file2.csv
    ID,city,name,location
    2,NY,JERRY,NY
    >
    > nawk 'FNR==1 && NR!=1{next;}{print}' *.csv
    Id,city,name ,location
    1,NA,JACK,CA
    2,NY,JERRY,NY
    >

"But this would fail for most of the cases where the header has been pre-compiled into object files" - This sentence makes no sense. A header can be used to compile an object file, but headers never get compiled into object files; they are always used externally.

Declaring "RewriteEngine On" twice will not hurt. For Question 2, you do not need to start or end with WP. I am using some extra code for security:
    # Disable server signature
    ServerSignature Off
    # Protect some files from direct access
    <FilesMatch "^(wp-config.php|php.ini|php5.ini|install.php|php.info|readme.html|bb-config.php|.htaccess|readme.txt|timthumb.php|error_log|error.log|PHP_errors.log|.svn)">
    Deny from all
    </FilesMatch>
    # Disable trace track requests
    RewriteEngine On
    RewriteCond %{REQUEST_METHOD} ^(TRACE|TRACK)
    RewriteRule .* - [F]
    # Deny some known bad bots
    RewriteEngine On
    RewriteCond %{HTTP_USER_AGENT} ^BlackWidow [OR]
    RewriteCond %{HTTP_USER_AGENT} ^Bot mailto:craftbot@yahoo.com [OR]
    RewriteCond %{HTTP_USER_AGENT} ^ChinaClaw [OR]
    RewriteCond %{HTTP_USER_AGENT} ^

In driver.c you probably have a call to asm_main. Try adding
    extern return_type asm_main(Type1 name1, ..., Typen namen);
where return_type is replaced by the return type of the function and Typei is the type of the parameter numbered i that is passed.

You need to declare the yyparse function in the testlex.cc file:
    int yyparse();
This is what is known as a function prototype, and it tells the compiler that the function exists and can be called. After looking a little closer at your source I now know the reason why the existing prototype didn't work: it's because you declared it as extern "C" but compiled the file as a C++ source. The extern "C" told the compiler that the yyparse function was an old C-style function, but then you continued to compile the source with a C++ compiler. This caused a name mismatch.

You need to add a reference to Thrift.dll, located under thrift/lib/csharp/src/bin/Debug/Thrift.dll. You'd better read the documentation, or you can download it as a NuGet package. To install Thrift, run the following command in the Package Manager Console:
    PM> Install-Package Thrift

Header files are not compiled. Header files are used by the preprocessor - anywhere you have a #include or a #import, the actual text of the included file is treated as though you'd copied and pasted it into the including file. Hence it doesn't matter if your file is called .hpp, .h or anything else. If a .m file imports a .h file that includes a .hpp file, then the .hpp code will be compiled as part of the .m file, i.e. as Objective-C. I am therefore going to guess that you've got GLView.m. If that's going to import a .hpp file, whether directly or indirectly, it needs to be compiled as Objective-C++. One way to do it is to rename it .mm; the other is to tell the project not to try to guess language types by file extension.
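To make the earlier "it's the linker that handles all that" answer concrete, here is a minimal sketch; the file and function names are invented purely for illustration and are not taken from the original questions:

    // func.h - declaration only: tells the compiler the symbol exists somewhere
    #ifndef FUNC_H
    #define FUNC_H
    int func(int x);
    #endif

    // func.cpp - the one and only definition of func
    #include "func.h"
    int func(int x) { return x * 2; }

    // main.cpp - the object file for this unit records an unresolved reference to func
    #include <iostream>
    #include "func.h"
    int main() { std::cout << func(21) << '\n'; }

Compiling main.cpp on its own succeeds because the compiler only needs the declaration; it is the link step (for example g++ main.o func.o) that searches the other object files and libraries and resolves the symbol.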
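As a companion to the yyparse answer above, here is a minimal sketch of the usual extern "C" pattern for calling a function compiled as plain C from C++; the function name c_add and the file names are invented for the example:

    /* clib.h - shared between the C and C++ sides */
    #ifndef CLIB_H
    #define CLIB_H
    #ifdef __cplusplus
    extern "C" {
    #endif
    int c_add(int a, int b);   /* the definition lives in a .c file compiled by a C compiler */
    #ifdef __cplusplus
    }
    #endif
    #endif

    /* clib.c */
    #include "clib.h"
    int c_add(int a, int b) { return a + b; }

    // caller.cpp
    #include "clib.h"
    int main() { return c_add(2, 3) == 5 ? 0 : 1; }

The extern "C" block tells the C++ compiler not to mangle the name, so the linker can match the call against the symbol emitted by the C compiler. Declaring a function extern "C" but compiling its definition as C++ produces exactly the kind of name mismatch described in that answer.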
Compiling a Clojure source file involves evaluating all top-level forms. This is in fact strictly necessary to support the expected semantics -- most notably, macros couldn't work properly otherwise [1]. If you AOT compile your code, top-level forms will be evaluated at compile time, and then again at run time as your compiled code is loaded. For this reason, it is generally not a good idea to cause side effects in code living at top level. If an app requires initialization, it should be performed by a function (typically -main). [1] A macro is a function living in a Var marked as a macro (with :macro true in the Var's metadata; there's a setMacro method on clojure.lang.Var which adds this entry). Macros must clearly be available to the compiler, so they must be loaded at compile time.

Opening compile.el and then searching with C-s for "(funcall" jumped me to:
    (setq file (if (functionp file) (funcall file)
                 (match-string-no-properties file))))
which seems to be the relevant spot and shows that the function is indeed called with no arguments and that the match-data is still very much valid, so you can extract the file name with (match-string <the-file-sub-group>).

I searched and searched and found several tries to patch, but none of them worked. Most of these were patches suggested directly from the Apache dev group. But I finally came across this Apache mail list. It suggests a straightforward solution that the patches could not provide: compile the gen_test_char app before trying to cross-compile Apache. So I did, followed the suggestions, and it worked like a charm. Instead, just compile gen_test_char.c first with something like:
    gcc -Wall -O2 -DCROSS_COMPILE gen_test_char.c -s -o gen_test_char
then run it and put its output into the include folder (or wherever it is normally placed); after this compilation, run it to get the desired output with:
    ./gen_test_char > test_char.h

It is normal that you do not get any error, because your definition and implementation are regarded as one file. Better and usual C++ style is:
Header file (myClass.h): You should not include the implementation file (the compiler will find it, if there is any, for you).
Implementation (myClass.cpp): #include "myClass.h"
Main program (main.cpp): This will also need #include "myClass.h"
If you implement using this usual style, you are expected to get a linkage error, only because you shouldn't have separated the definition and implementation of template functions/classes. The author might be referring to this.

Looking at the code of FlatFileItemReader, the file stop condition is managed with the private field boolean noInput and the private function readLine() used in protected doRead(). IMHO the best solution is to throw a runtime exception from your skippedLineCallback and manage the error as a reader exhaustion condition. For example, write your delegate in this way:
    class SkippableItemReader<T> implements ItemStreamReader<T> {
        private ItemStreamReader<T> flatFileItemReader;
        private boolean headerError = false;

        public void open(ExecutionContext executionContext) throws ItemStreamException {
            try {
                flatFileItemReader.open(executionContext);
            } catch (MyCustomExceptionHeaderErrorException e) {
                headerError = true;
            }
        }

        public T read() throws Exception {
            if (headerError)
                return null;                    // signal exhaustion if the header was bad
            return flatFileItemReader.read();   // assumed continuation: the original snippet is truncated here
        }

        // assumed completion of the truncated snippet: remaining ItemStream methods just delegate
        public void update(ExecutionContext executionContext) throws ItemStreamException { flatFileItemReader.update(executionContext); }
        public void close() throws ItemStreamException { flatFileItemReader.close(); }
    }

Just figure out the name associated with each column and use that mapping to manipulate the columns. If you're trying to do this in awk, you can use associative arrays to store the column names and the rows those correspond to.
If you're using ksh93 or bash, you can use associative arrays to store the column names and the rows those correspond to. If you're using perl or python or ruby or ..., you can do the same, or push the columns into an array to map the numbers to column numbers. Either way, you then have a list of column headers, which can further be manipulated however you need to.

I'll say more about the DLL further down, but for a start, here is the layout of the source files you'll need to do that. You'll need three files: main.c, kmp.h and kmp.c. Code structure:

File main.c:
    #include <stdio.h>
    #include "kmp.h"   // this will make the kmp() function known to main()

    int main(int argc, const char *argv[])
    {
        char target[200];
        ... same code as you already have
    }

File kmp.h:
    // prototype to make kmp() function known to external programs (via #include)
    extern int kmp(char *target, int tsize, char *pattern, int psize);

File kmp.c:
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    // declare kmp prototype as DLL-export
    __declspec(dllexport) int kmp(char *target, int tsize, char *pattern, int psize);

You can include files wherever you want. The problem is OUTPUT. If you're going to be doing header() calls, you cannot have performed ANY output prior to the call. If your includes are simply spitting out output, and are not just defining functions/vars for later use, then you'll have to modify the includes to NOT do that output, or at least buffer/defer the output until later. E.g.
    <?php
    include('some_file_that_causes_output_when_loaded.php');
    header('This header will not work');
will fail no matter what, because your include did output, which kills the header call. But if you mod the include, so you have something more like:
    <?php
    include('file_that_just_defines_functions.php');
    header('This header will work');
    function_from_include_that_causes_output();
it will work just fine.

Use of the pre-processor directive #include for a header file with no parseable code will not increase the size of the compiled binary. Typically a header file will usually only include declarations - not definitions. So the inclusion of a C header file will not normally increase the size of the binary anyway. For example, in a header file this statement:
    int maxlines;
would create the definition of a variable, which would be stored in the compiled binary file. The inclusion of the definition would increase the size of the binary. Function declarations and pre-processor tokens such as
    int parseFiles(const char *file);
and
    #define MAX_LINES 80
however will not increase the program size. One effect of keeping the #include statement will be to increase the time taken to compile slightly.

Use the xptemplate plugin.

You're wrong on quite a few points.
- You're assuming that every DLL has a header file. In reality, there may be 0, 1 or more than 1.
- You're assuming that 64-bit DLLs have different headers. This is obviously untrue, just think of <windows.h>.
- You're assuming that DLL versions are in exact sync with headers. Again, think of <windows.h>.
- You're assuming that function names in a DLL are mangled, in particular re. 32/64 bits. Just think of LoadLibrary(L"MessageBoxW") - that is the mangled name, and "W" only means Unicode, not 64 bits.
- You're assuming that every DLL has a .lib file. Equally untrue. Why would a COM component DLL need one?

Not exactly answering the question, but using forward declarations and keeping includes in headers minimal might help in avoiding such problems.
If you only use forward declarations in your .hpp files, they will be independent from each other, and you will be forced to include the specific .hpp files into each .cpp unit being compiled.

Although your question is a little lacking in details, I think this might at least approach what you are wanting:
    filename=allresponses_11.txt
    filenum=$(echo ${filename} | tr -dc 0-9)
    echo "seq_num file_num hname"
    for h in $(head -1 ${filename})
    do
        echo "${filenum} ${h}"
    done | cat -n

Unfortunately no, that isn't possible, as the C# compiler doesn't understand what to do with .h files. Even if it did, it is still illegal to have un-scoped variable declarations (constant or otherwise) in .NET. You'll have to convert the file either by hand, or, as mentioned in the comment by Joachim Pileborg, build a utility to auto-convert it to C# code for you.

You can use PHP's readfile() to force a file download, if that's what you are trying to do. There is an example on that page that shows you how to download an image. The PHP header function sends a raw HTTP header to the client. The Location header is usually used to redirect the client to a new page in PHP.

I don't know about that error, but your second reference to indexPath on this line can't be right:
    NSString *selectedPhotos = [tagImages[indexPath.section] objectAtIndex:indexPath];
The objectAtIndex parameter needs an integer. You probably meant:
    NSString *selectedPhotos = [tagImages[indexPath.section] objectAtIndex:indexPath.item];
or, to be more consistent (either use objectAtIndex or use the [] syntax, but using both is a little curious):
    NSString *selectedPhotos = tagImages[indexPath.section][indexPath.item];
You haven't shown the declaration of tagImages, nor its initialization, but I assume tagImages is an array of arrays of strings? If so, the above correction should fix it. If not, you should tell us how tagImages is populated/structured.

Your current code does not output a file, it just sends headers. In order for your script to work, add the following code after your fclose statement:
    $data = file_get_contents($file);
    echo $data;

If you set the format parameter to 1, it'll make an associative array of the headers:
    $headers = get_headers($url, 1);
    $content_type = $headers['Content-Type'];
See the documentation of get_headers for more information.

random is introduced in C++11, so add this to your g++ options: --std=c++0x or --std=gnu++0x. The option is probably set in your makefile.

Alter the include path to use a directory controlled by you before it uses the directory holding "foo.h". In the directory controlled by you, make a symlink called "foo.h" which points to "my_foo.h" as the target.

The only function that you can call is GetTPIErrorDescription_VB. All the others use C++ classes that you cannot access. So I suggest that you do the following:
- Remove all the other functions from the header file.
- Remove the #include and the using lines.
- Remove the #ifdef and replace TRACKERERRORSDLL_VB with __stdcall.
- Either include windows.h or add some #define statements for the Win32 types.
- Possibly deal with the bool type depending on whether or not MATLAB knows how to deal with it. If MATLAB won't recognise it, replace bool with int.
At that point the call to loadlibrary should work, and then you just need to write the code that calls calllib. The resulting header file might look something like this:
    #define LPSTR char*
    __declspec(dllimport) bool __stdcall GetTPIErrorDescriptio
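Here is a minimal sketch of the forward-declaration advice given earlier on this page; the class names Engine and Widget and the file names are invented for the example:

    // engine.h
    #ifndef ENGINE_H
    #define ENGINE_H
    class Engine {
    public:
        void start() {}
    };
    #endif

    // widget.h - no #include "engine.h" needed here
    #ifndef WIDGET_H
    #define WIDGET_H
    class Engine;                // forward declaration is enough for a pointer member
    class Widget {
    public:
        explicit Widget(Engine* engine);
        void run();
    private:
        Engine* engine_;         // only the class name is needed, not the full definition
    };
    #endif

    // widget.cpp - the full definition is only pulled in where members are actually used
    #include "widget.h"
    #include "engine.h"
    Widget::Widget(Engine* engine) : engine_(engine) {}
    void Widget::run() { engine_->start(); }

Each .cpp file then includes exactly the headers it really needs, which keeps the headers independent of each other and tends to shorten rebuilds.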
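And for the C++11 random answer above, this is roughly the kind of code that needs the standard flag; the file name is invented:

    // rng.cpp - compile with: g++ --std=c++0x rng.cpp   (newer compilers also accept -std=c++11)
    #include <iostream>
    #include <random>

    int main() {
        std::mt19937 engine(42);                          // C++11 Mersenne Twister engine
        std::uniform_int_distribution<int> dist(1, 6);    // simulated die roll
        for (int i = 0; i < 5; ++i)
            std::cout << dist(engine) << ' ';
        std::cout << '\n';
    }

Without the flag, older g++ releases default to C++98 and complain that <random> (or the classes declared in it) cannot be found.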
The resulting header file might look like something this: #define LPSTR char* __declspec(dllimport) bool __stdcall GetTPIErrorDescriptio Simple Rule: Include the header file only in those files which need it. If your source or header file does not use any constructs defined/declared in the header file then there is no need to include that header. Doing so only brings unnecessary code in to those translation units thereby corrupting the namespace and possibly increasing the compilation time. In the C preprocessor, ## is the token concatenation operator. So lcname##_ can be read as "create a new token by putting _ at the end of lcname". I presume that the quoted preprocessor code is defining a preprocessor macro LAPACK_GLOBAL which is intended to be used as follows: #define LAPACK_something LAPACK_GLOBAL(something, SOMETHING) after which any use of LAPACK_something will be substituted by one of the following: something something_ SOMETHING depending on the environment. In the header file,you can declare all class fields and functions,that you will implement in the cpp file. You should'nt declare the variables that you will use inside your class functions(because there are like temp variables),But you can declare(and should) const varables,that you will use in your progamm. You should'nt use extern,extern its for variables that you dont define them,and the compiler doesnt bother you about that,because when its extern,that means that the variable is defined somewhere else in your progamm. extern is also used for functions that you want to treat your c++ code,as it were in C. like: extern "C" { //code. } or extern "C" int func(int a); In conclusion- delcare const variables in .h file,and the classes.else - in cpp(unless its inline). in your case: c What you're looking for is what I call a "lop" operation, which is kind of like a truncate, but at the front of the file. I posted a question about it some time back. Unfortunately, there is no such thing available.
http://www.w3hello.com/questions/-Asp-net-Header-file-scenario-please-compilation-err-
CC-MAIN-2018-17
en
refinedweb
Fundamentals of Nursing Finals: From ATI Concepts, Outlines and Definitions for Foundations of Nursing Key terms, definitions, lists, concepts for ATI Fundamental Finals from Fundamentals for Nursing 7.0, Review Module The Joint Commission Sets quality standards for accreditation of health care facilities Utilization review committee Monitors appropriate diagnosis and treatment of hospitalized clients Federally funded health care programs 1) Medicare - for clients 65 years or older or with permanent disabilities; premiums applied either through (a) Insurance programs - DRG; or (b) MCO (Managed Care Organizations) - enrolled clients receive comprehensive care overseen by a primary care provider 2) Medicaid - for clients with low incomes; State determines eligibility requirements Privately funded health care plans a) Traditional - fee for service b) MCO - Managed Care Organization; primary care provider focuses on prevention and health promotion c) PPO - preferred provider organization; client chooses from a list d) EPO - exclusive provider organization; client chooses from a list within a contracted organization e) LTC - long term care insurance; expenses not covered by Medicare Levels of health care a) Preventive - focus on education (immunization, stress management, seat belts) b) Primary - emphasizes health promotion (prenatal and well baby care) c) Secondary - diagnosis and treatment of emergency, acute illness or injury (acute inpatient care, ER) d) Tertiary - specialized highly technical care (Oncology, burn centers) e) Restorative - intermediate follow up care for restoring health (home health care, rehab centers) f) Continuing health care - addresses long term or chronic health care needs (hospice, adult day care) Interdisciplinary personnel (non-nursing) a) clergy b) registered dietitian c) laboratory technician d) occupational therapist e) Pharmacist f) Physical Therapist g) Provider - MD, DO h) Radiologic technologist i) Respiratory therapist j) Social worker k) Speech Therapist Nursing Personnel a) RN b) LPN/LVN c) UAP - CNA, CMA Expanded Nursing Roles a) APN - advanced practice nurse (CNS - clinical nurse specialist; CRNA - certified registered nurse anesthetist; NP - nurse practitioner; CNM - certified nurse-midwife) b) Nurse Educator c) Nurse Administrator d) Nurse Researcher ETHICS Based on an expected behavior of a certain group in relation to what is considered right or wrong. Study of conduct and character MORALS Values and beliefs held by people that guide their behaviors and decision making ETHICAL THEORY Examines different principles, ideas, systems and philosophies used to make judgments about what is right and wrong, good or bad. a) utilitarianism - the value of a thing is based on its use; end justifies the means b) deontology - moral rights and duties ETHICAL PRINCIPLES related to the treatment of clients a) Autonomy b) Beneficence c) Fidelity d) Justice e) Nonmaleficence AUTONOMY Ability of client to make personal decisions even when they may not be in client's best interest BENEFICENCE Agreement that care given is in the best interest of the client; taking positive actions to help others FIDELITY Agreement to keep one's promise to the client about care that was offered.
JUSTICE Fair treatment in matters related to physical and psychosocial care and use of resources NONMALEFICENCE Avoidance of harm or pain as much as possible when giving treatments Ethical Dilemmas Problems about which more than one choice can be made and the choice made is influenced by the values and beliefs of decision makers. A problem in health care is an ethical dilemma if: a) it cannot be solved solely by a review of scientific data b) it involves a conflict between two moral imperatives c) the answer will have a profound effect on the situation/client Documents used to help solve ethical dilemmas a) ANA Code of Ethics for Nurses (2001) b) ICN Code of Ethics for Nurses (2006) --------------------------------------------------------- for LPN/LVN - Code of Ethics by NAPNES Basic principles of Nurse's code of Ethics a) Advocacy - support the cause of client b) Responsibility - willingness to respect obligations c) Accountability - ability to answer for one's actions d) Confidentiality - protection of privacy without diminishing care UDDA - Uniform Determination of Death Act Document used to assist on issues regarding end-of-life and organ donor issues FEDERAL regulations and laws that impact nursing practice a) HIPAA - Health Insurance Portability and Accountability Act b) ADA - Americans with Disabilities Act c) MHPA - Mental Health Parity Act d) PSDA - Patient Self Determination Act CRIMINAL LAW Subsection of public law and relates to the relationship of an individual with the government. CIVIL LAW Protects the individual rights of people. One type of civil law that relates to the provision of nursing care is TORT LAW TORTS a) Unintentional Torts (Negligence, Malpractice) b) Quasi-intentional Torts (Breach of confidentiality, defamation of character) c) Intentional Torts (Assault, Battery, False Imprisonment) NEGLIGENCE A nurse FAILS TO implement safety measures for a client who has been identified as at risk for falls MALPRACTICE (Professional negligence) A nurse administers a large dose of medication due to a calculation error. The client has a cardiac arrest and dies. BREACH OF CONFIDENTIALITY A nurse releases the medical diagnosis of a client to a member of the press DEFAMATION OF CHARACTER A nurse tells a colleague that she believes the client has been unfaithful to her spouse ASSAULT The conduct of one person makes another person fearful or apprehensive (threatening to place a nasogastric tube in a client who refuses to eat) BATTERY Intentional and wrongful physical contact with a person that involves an injury or offensive contact (restraining a client and administering an injection against her wishes). FALSE IMPRISONMENT A person is confined or restrained against his will (using restraints on a competent client to prevent his leaving the health care facility) COMPACT STATE In relation to licensure, a nurse who resides in a compact State is allowed to practice in another compact State under a multi-State license. PROFESSIONAL NEGLIGENCE Failure of a person with professional training to act in a reasonable and prudent manner. Reasonable and Prudent Used to describe a person who has the average judgment, intelligence, foresight, and skill that would be expected of a person with similar training and experience. FIVE ELEMENTS necessary to prove NEGLIGENCE 1. Duty to provide care as defined by standard 2. Breach of duty by failure to meet standards 3. Foreseeability to harm 4. Breach of duty has potential to cause harm (combines 2 and 3) 5.
Harm occurs Ways to AVOID being liable for NEGLIGENCE 1) Follow standards of care 2) Give competent care 3) Communicate with other health team members 4) Develop a caring rapport with clients 5) Fully document assessments, interventions and evaluations PATIENT'S (Client's) Rights a) Informed consent b) Right to refuse treatment c) Advanced directives d) Confidentiality e) Information security American Hospital Association (AHA) identifies patient's rights in Health Care settings Contained in "The Patient Care Partnership" Informed Consent Written and signed permission (document) for a procedure or treatment to be performed. Elements of an Informed consent a) Reason for treatment or procedure b) How procedure will benefit the client c) Risks involved if client agrees to the procedure d) Other options available to treat the problem Nurse's ROLE in an INFORMED CONSENT a) WITNESS THE CLIENT's SIGNATURE b) Ensure that informed consent has been appropriately obtained Who CAN SIGN an INFORMED CONSENT? a) a competent adult b) someone who is capable of understanding the information provided. AUTHORIZED individuals to grant consent for another person: a) Parent of a minor b) Legal guardian c) Court-specified representative d) A person with a durable Power of Attorney (POA) for health care e) Emancipated minors (independent minors such as married minors) Patient Self Determination Act (PSDA) Upon admission to a Health Care facility, a client must be informed of their right to accept or refuse care. AMA (against medical advice) form Client must sign this document when a decision is made to leave the facility without a discharge order. Standards of Care (Practice) Legal parameters of practice, define and direct the level of care that should be given by a practicing nurse. They are used in malpractice lawsuits to determine if that level was maintained. Where can the "Standards of Care" be found? in the NURSE PRACTICE ACT of each State SCOPE OF PRACTICE That which belongs to the area of competence as determined by training, education and licensing requirements. ADVANCED DIRECTIVES Communicates a client's wishes regarding and end-of-life care should the client become unable to do so. PSDA requires that all health care facilities ask if a patient has advanced directives upon admission. Types of ADVANCED DIRECTIVES a) Living Will - expresses client's wishes regarding medical treatment in the event the client becomes incapacitated and is facing end of life issues b) Durable Power of Attorney for Health Care- designates a health care proxy who is an individual authorized to make health care decisions for a client who is unable c) Provider's Orders - Unless a DNR (do not resuscitate) or AND (allow natural death) order is written, nurse initiates CPR when client has no pulse or respiration. Written order for DNR or AND must be placed in client's medical record. Mandatory Reporting Nurses are mandated to report any suspicion of abuse (child or elder abuse, domestic violence) following facility policy. Communicable disease Reporting Nurses are also mandated to report to the proper agency (local or State health department) when client has been diagnosed with a communicable disease (e.g. tuberculosis, hepatitis A) Client's CHART OR MEDICAL RECORD Legal record of care: confidential, permanent and legal document that is admissible in court. 
Information included in the Client's Chart a) Assessments b) Medication Administration c) Treatments given and the client's responses d) Client education PURPOSES for MEDICAL RECORDS a) communication b) legal documentation c) financial billing d) education e) research f) auditing/monitoring Purpose of reporting Provide continuity of care Good qualities of DOCUMENTATION a) Factual - Subjective and Objective b) Accurate and Concise c) Complete and Current d) Organized LEGAL GUIDELINES OF a GOOD DOCUMENTATION a) Begin entry with DATE and TIME b) Record legible with non-erasable black ink; do not LEAVE BLANK SPACES c) Do NOT USE Correction Fluid d) Information inadvertently omitted may be added as "LATE ENTRY". Must include time charting was done and the time the charting reflects e) Should reflect assessments, interventions, and evaluations performed by the person signing the entry. f) for electronic charting, password must be PRIVATE DOCUMENTATION FORMATS a) Flow Charts - record and show trends in vital signs, blood glucose levels, pain level and other frequently performed assessments b) Narrative Documentation - records information as a sequence of events c) Charting by exception (CBE) - standardized forms that identify normal findings/values and allows selective documentation of abnormal findings. d) Problem oriented medical records (POMR) - consists of database, problem list, care plan, and progress notes. POMR (Problem oriented medical records) a) SOAPIE b) PIE c) DAR (focus charting) SOAPIE S = subjective data O = objective date A = assessment (includes Nursing Diagnosis based on assessment) P = Plan I = Intervention E = Evaluation PIE P = problem I = intervention E = evaluation DAR (focus charting) D = data A = action R = response Change-of-shift Reports Given at the conclusion of each shift by the nurse leaving to the nurse assuming responsibility for the client. Transcribing Medical orders a) Have a second nurse listen to telephone order b) Repeat back the order given: medication name, dosage, time and route c) Document reading back the order and presence of nurse d) Question any order that may seem contraindicated due to a previous order or due to the client's condition TRANSFER REPORTS SHOULD INCLUDE CLIENT's: a) demographic information b) medical diagnosis c) directives d) most recent vital signs e) current prescribed medication f) allergies g) diet and activity orders h) special equipment i) advanced directives j) family involvement Components of the Privacy Rule of HIPAA: a) Only health care team members directly responsible for client care should be allowed access to client records. b) Clients have the right to read and obtain a copy of their medical record, and facility policy should be followed when client requests as such. c) No part of the client chart can be copied except for authorized exchange of documents between health care institution or health care providers. d) Client medical records must be kept in secure area e) EMR should be password protected f) Health care workers should only use their own passwords INFORMATION SECURITY PROTOCOLS a) Logging off from computer before leaving work station b) Never share a user ID or password with anyone c) Never leave a client's chart or other patient document where others can access it. d) Shred any printed or written client information used for reporting or client care after it is no longer needed. 
Five RIGHTS of DELEGATION a) Right Task b) Right Circumstance c) Right Person d) Right Direction/Communication e) Right Supervision Nursing Process (overview) a) Cyclical, critical thinking process that consists of five steps; variation of scientific reasoning b) client centered, problem solving and decision making framework c) Provides the framework throughout which the nurse can apply knowledge, experience, judgment and skills to the formulation of a nursing care plan d) Includes five sequential but overlapping steps e) allows nurse to integrate creative critical thinking f) Promotes professionalism Five steps of Nursing Process 1) Assessment/data collection 2) Diagnosis - Analysis/data collection 3) Outcome - planning 4) Implementation - intervention 5) Evaluation Methods of Data Collection 1) Observation 2) Interviews 3) Comprehensive or focused physical examination 4) Diagnostic and laboratory reports 5) Collaboration Two Types of DATA a) Subjective Data - Symptoms; what patient says b) Objective Data - Signs; what is seen in a patient Sources of DATA a) Primary = the CLIENT b) Secondary = all other sources which does NOT COME FROM THE client Nurse engages in three types of PLANNING a) Comprehensive plan of care upon admission b) Ongoing planning based upon new information and new assessments and new evaluations c) Discharge planning. Begins as soon as the client is admitted Maslow's hierarchy of needs A framework used as a guideline to set priorities Types of INTERVENTIONS a) Nursing initiated/independent interventions - within Scope of Practice as identified by ANA Standards of Practice, State Nurse Practice Act and health care facility policies b) Physician initiated/dependent interventions - the nurse initiates as a result of a physician's order c) Collaborative intervention - the nurse carries out in collaboration with other health care team professionals. Components of CRITICAL THINKING a) Knowledge b) Experience c) Competencies d) Attitudes e) Intellectual and Professional Standards Levels of Critical Thinking a) Basic Critical thinking - results from limited knowledge and experience b) Complex critical thinking - expresses autonomy by analyzing and examining data to determine best alternatives c) Commitment - results from an expert level of knowledge, experience, developed intuition, and reflective flexible attitudes Assessment/Data collection Collect information about client's present health status to identify client's needs and to identify additional data to collect based on nurse's findings DIAGNOSIS: Analysis/Data collection Interpret or monitor the collected database, reach an appropriate nursing judgment about client's health status and coping mechanisms, and provide direction for nursing care OUTCOME: Planning Establish priorities and optimal outcomes of care than can be measured and evaluated, then select the nursing interventions to include in a client's plan of care to promote, maintain, or restore a client's health Implementation provide client care based on assessment data gathered, analyses done, and the plan of care developed in the previous steps of the nursing process. Evaluation Examine a client's response to nursing interventions and form a clinical judgment about the extent to which goals and outcomes have been met. 
Attitudes of a critical thinker 1) Confidence - feels sure of abilities 2) Independence - Analyzes ideas for logical reasoning 3) Fairness - objective, nonjudgmental 4) Responsibility - practices according to standards of practice 5) Risk taking - takes calculated chances in finding better solutions to problems 6) Discipline - Develops a systematic approach to thinking 7) Perseverance - continues to work until problem is resolved 8) Creativity - using imagination to find solutions unique to client problems 9) Curiosity - requires more information about clients and problems 10) Integrity - practices truthfully and ethically 11) Humility - Acknowledges weakness 12) Standards - Model to which care is compared to determine acceptability, excellence and appropriateness Asepsis Absence of illness-producing micro-organisms, maintained through the use of aseptic technique with hand hygiene as the primary behavior associated with asepsis/aseptic technique. Two types of Asepsis a) Medical Asepsis - clean technique - reduction of micro-organisms b) Surgical Asepsis - sterile technique - elimination of micro-organisms Components of Hand washing a) Soap b) Water c) Friction Length of time required for hand washing at least 15 seconds to remove transient flora and up to 2 minutes when the hands are soiled. Iatrogenic infection Type of HAI resulting from a diagnostic or therapeutic procudure Signs and symptoms of generalized systemic infection a) Fever b) Increased pulse and respiratory rate (in response to high fever) c) Malaise d) Anorexia, nausea, and/or vomiting e) Enlarged lymph nodes (repositories of waste) HAI - Health-care Associated Infection Formerly known as nosocomial infection, they are acquired while receiving care in a health care setting. These can come from an exogenous source (outside the client) or an endogenous source (inside the client) Inflammation Body's local response to injury or infection 3 Stages of the inflammatory response I. FIRST STAGE - inflammatory response (local infection) a) Redness from dilation of arterioles bringing blood to the area b) Warmth of the area on palpation c) Edema d) Pain or tenderness e) Loss of use of the affected part II. SECOND STAGE - micro-organisms have been killed. Fluid accumulates and exudate appears at the site of infection a) Serous b) Sanguineous (contains red blood cells) c) Purulent (contains leukocytes and bacteria) III. THIRD STAGE - damage tissue is replaced by scar tissue. Gradually new cells take on characteristics that are similar in structure and function of the old cells. Laboratory results indicating infection a) Leukocytes (WBC more than 10,000 ul) b) Increases in the specific type of WBCs on differential (left shift = an increase in neutrophils) c) Elevated erythrocyte sedimentation rate (ESR) d) Presence of micro-organisms on culture of the specific fluid/area COMPONENTS OF THE CHAIN OF INFECTION a) Infectious agent - bacteria, virus, fungi, protozoa b) Reservoir - where infectious agent grows (wound drainage, food, oxygen tubing) c) Exit - portal of the infectious agent (skin, respiratory or GI tract) d) Means of transmission (droplet, person to person contact, touching contaminated items) e) Entry - portal to a susceptible host (same as Exit) f) Host - must be susceptible to infectious agent Infection control for immobile clients Ensure that pulmonary hygiene (turning, coughing, deep breathing, incentive spirometry) is done every 2 hours, or as prescribed. 
Good pulmonary hygiene decreases the growth of micro-organisms and the development of pneumonia by preventing stasis of pulmonary excretions, stimulating ciliary movement and clearance, and expanding the lungs Isolation guidelines a) group of actions that include hand hygiene and the use of barrier precautions b) applies to every client, regardless of diagnosis and must be implemented whenever contact with a potentially infectious material is anticipated c) PPE is changed after contact with each client and between procedures with the same client if in contact with large amounts of blood and body fluids. STANDARD PRECAUTIONS (Tier One) a) applies to all body fluids (except sweat), nonintact skin, and mucous membranes. b) Hand hygiene using alcohol based waterless product is recommended after contact with client, body fluids, and contaminated equipment. c) Alcohol-based waterless antiseptic is preferred unless hands are visible dirty, because alcohol based product is more effective in removing micro-organisms. d) Clean gloves are worn when touching body fluids, nonintact skin, mucous membranes, and contaminated equipment e) Gloves are removed and hand hygiene performed between each client. f) Masks, eye protection, and/or face shield are required when care may cause splashing or spraying of body fluids. g) Hand hygiene is required after removal of the gown. A sturdy moisture-resistant bag should be used for soiled items and the bag should be tied securely in a knot at the top. h) All equipment used for client care is to be properly cleaned, one time use should be disposed accordingly. i) Contaminated laundry should be bagged and handled to prevent leaking or contamination of clothing or skin j) Safety devices on all equipment/supplies must be enabled after use; sharps must be disposed of in a puncture-resistant container k) A private room is not needed unless the client is unable to maintain appropriate hygiene practices. TRANSMISSION PRECAUTION (Tier Two) a) Airborne precautions are used to protect against droplet infections smaller than 5 mcg (measles, varicella, pulmonary or laryngeal tuberculosis). Airborne precaution require: (1) Private room; (2) Mask, respirators for providers and visitors; N95 or HEPA (high efficiency particulate air) respirator if patient is suspected to have TB; (3) Negative pressure airflow exchange in the room of at least six exchanges per hour. b) Droplet precautions against droplet larger than 5 mcg (streptococcal pharyngitis or pneumonia, scarlet fever, rubella, pertussis, mumps, mycoplasma pneumonia, meningococcal pneumonia/sepsis, pneumonic plague). Droplet precautions require: (1) Private room or a room with other clients having the same infection; (2) Masks for providers and visitors c) Contact precautions - protect visitors and caregivers against direct client/environmental contact infections (respiratory syncytial virus, shigella, enteric diseases caused by micro-organisms, wound infections, herpes simplex, scabies, multi-drug resistant organisms. Contact precautions require: (a) private room or room with other client with the same disease; (b) gloves and gown worn by caregivers and visitors; (c) disposal of infectious dressing material into a single, nonporous bag without touching the outside of the bag. TRANSPORTING THE CLIENT If movement of client to another area is not avoidable, take precautions to ensure that the environment is not contaminated. 
For example, a surgical mask is placed on the client with an airborne or droplet infection, and a draining wound is well covered. Guidelines for cleaning contaminated equipment 1) Always wear gloves 2) Rinse first in cold water (hot water coagulates proteins) 3) Wash the article in hot water with soap 4) Use a brush or abrasive to clean corners 5) Rinse well in warm or hot water 6) Dry the article - considered clean at this point 7) Clean the equipment used in cleaning 8) Remove gloves and perform hand hygiene Older Adults increased risk for falls a) Due to decreased strength b) Impaired mobility and balance c) Limited endurance d) Decreased sensory perception Other clients with increased risk for falls 1) Those with decreased visual acuity 2) Generalized weakness 3) Gait and balance problems (CP, Injury, MS) and cognitive dysfunctions 4) Side effects of medication (orthostatic hypotension, drowsiness) GENERAL MEASURES to PREVENT FALLS 1) Call light location and use 2) Respond to call lights in a timely manner 3) Orient client to setting 4) Place client at risk for falls close to the nursing station 5) Ensure bedside table and frequently used items are within reach 6) Maintain bed in low position 7) For clients who are sedated or unconscious, bed rails are up, bed is kept low 8) Avoid the use of full side bed rails for clients who get out of bed without assistance 9) Provide clients with nonskid footwear 10) Keep floor free from clutter with a clear path to the bathroom 11) Keep assistive devices nearby after validation of use 12) Lock wheels on beds, wheelchairs and carts 13) Use chair or bed sensors for clients at risk for getting up unattended to alert staff SEIZURES Sudden surge of electrical activity in the brain. May occur anytime, may be due to epilepsy, fever or a variety of medical conditions. Partial seizures are surges in one part of the brain. Generalized seizures involve the entire brain. Seizure precautions (most important) 1) Do not put anything in a client's mouth (except for status epilepticus, where an airway is needed) in the event of a seizure 2) Do not restrain a client in the event of a seizure. Lower him to the floor or bed. Protect head, remove nearby furniture, provide privacy, put client on his side with head flexed slightly forward, loosen clothing to prevent injury 3) Stay with the client and call for help 4) Administer medication as ordered 5) After seizure, explain what happened to client. Provide comfort. 6) Document thoroughly: duration, behavior, description, length, injury, aura, postictal state, and report to provider Two types of restraints: a) Physical b) Chemical Seclusions and restraints must never be used for: a) convenience of the staff b) punishment for the client c) clients who are extremely physically or mentally unstable d) clients who cannot tolerate the decreased stimulation of a seclusion room Restraints should: a) never interfere with treatment b) restrict movement as little as is necessary to ensure safety c) Fit properly d) Be easily changed to decrease the chance of injury and provide for the greatest level of dignity A prescription for restraint should contain the following: a) reason for restraint b) type of restraint c) location of the restraint d) how long the restraint may be used e) type of behaviors demonstrated by the client that warrant use of restraint How often should a physician rewrite a prescription for restraint? every 24 hours Frequency of client assessment in regards to food.
fluid, comfort and safety in relation to a restraint every 15 to 30 minutes Other important things to know and do about a restraint a) Always explain the need for restraint to client b) Obtain signed consent from client or guardian c) Review manufacturer's instructions for correct application d) Remove or replace restraints for good circulation e) Pad bony prominences f) Use a quick release knot to tie restraint to the bed frame g) Ensure that restraint is loose enough to fit two fingers between device and the client h) Never leave client unattended without restraint Fire response in a health care setting follows this pattern R - Rescue, protect and evacuate clients in close proximity to the fire A - Alarm, Report the fire by setting off the alarm C - Contain the fire by closing the doors and windows as well as turning off any sources of oxygen. Clients who are on life support are ventilated with a bag-valve mask E - Extinguish the fire if possible using an appropriate fire extinguisher 3 Classes of fire extinguisher Class A = paper, wood, upholstery, rags or other types of trash Class B = flammable liquids and gas fires Class C = for electrical fires to USE a FIRE EXTINGUISHER P = pull the pin A = aim at the base of the fire S = squeeze the lever S = sweep motion back and forth over the fire Factors that contribute to a client's risk for injury a) Age and development status b) Mobility and balance c) Knowledge about safety hazards d) Sensory and Cognitive awareness e) Communication skills f) Home and work environment g) Community in which the client lives Risk factors for falls among older adults 1. Physical, cognitive and sensory changes 2. Changes in the musculoskeletal and neurological systems 3. Impaired vision and/or hearing 4. Frequent trips to the bathroom at night because of nocturia and incontinence Places an older adult at risk for burns and other types of tissue injury Decrease in tactile sensitivity Modifications that can be made to improve home safety for Older Adults 1. Remove items that could cause a client to trip, such as throw rugs and carpets 2. Place electrical cords and extension cords against a wall behind furniture 3. Make sure that steps and sidewalks are in good repair 4. Place grab bars near the toilet and in the tub or shower and install a stool riser 5. Use a non-skid mat in the tub or shower 6. Place a shower chair in the shower 7. Ensure that lighting is adequate both inside and outside of the home Some HAZARDS of SMOKING: a) Passive smoking is the unintentional inhalation of tobacco smoke b) Exposure to nicotine and other toxins places people at risk for numerous diseases including cancer, heart disease, and lung infections. c) Low birth weight infants, prematurity, still births, and sudden infant death syndrome (SIDS) have been associated with maternal smoking. d) Smoking in the presence of children is associated with the development of bronchitis, pneumonia, and middle ear infections HAZARDS of CARBON MONOXIDE 1. Carbon monoxide is a very dangerous gas because it binds with hemoglobin and ultimately reduces the oxygen supplied to the tissues in the body. 2. Carbon monoxide cannot be seen, smelled or tasted 3. Symptoms of carbon monoxide poisoning include: nausea, vomiting, headache, weakness and unconsciousness 4. Death may occur with prolonged exposure 5. Measures to prevent Carbon Monoxide poisoning include proper ventilation when using fuel-burning devices (lawn-mowers, wood burning and gas fireplaces) 6.
Gas burning furnaces, water heaters and appliances should be inspected annually. 7. Carbon monoxide detectors should be installed and inspected regularly Hazards of FOOD POISONING a) Food poisoning is a major cause of illness in the US b) Most food poisoning is caused by some type of bacteria such as Escherichia coli, Listeria monocytogenes, and Salmonella c) Most food poisoning occurs because of unsanitary food practices d) Very young, very old, pregnant and immunocompromised individuals are at risk for complications e) Performing hand hygiene, ensuring that meat and fish are cooked to the correct temperature, handling raw and fresh food separately to avoid cross contamination, and refrigerating perishable items prevent food poisoning. Hazards of Bioterrorism 1. Bioterrorism is the dissemination of harmful toxins, bacteria, viruses, and pathogens for the purpose of causing illness or death 2. Anthrax, variola, Clostridium botulinum, and Yersinia pestis are examples of agents used by terrorists ERGONOMICS Factors or qualities in an object's design and/or use that contribute to comfort, safety, efficiency and ease of use. Good body mechanics Positioning and moving clients to promote safety for the client as well as for health care providers Mobility assessment Needed before attempting to move or position a client. Begin with ROM and progress as long as client tolerates. Include balance, gait and exercise Body Mechanics Proper use of muscles to maintain balance, posture, and body alignment when performing a physical task. Nurses use body mechanics when providing care to clients by lifting, bending, and carrying out the activities of daily living. Center of gravity 1) it is the center of mass 2) Weight is the quantity of matter acted on by force of gravity 3) To lift an object, the nurse must overcome the weight of the object and know the center of gravity of the object. 4) When the human body is in the upright position, the center of gravity is in the pelvis 5) When an individual moves, the center of gravity shifts 6) The closer the line of gravity is to the center of the base of support, the more stable the individual is. 7) The lower the center of gravity, the more stable the individual is; to lower the center of gravity, bend the hips and knees LIFTING a. Use the major muscle groups to prevent back strain, and tighten the abdominal muscles to increase support to the back muscles b. Distribute the weight between the large muscles of the arms and legs to decrease the strain on any one muscle group and avoid strain on smaller muscles. c.. d. Use assistive devices whenever possible, and seek assistance whenever it is needed. When pushing or pulling a load a) Widen the base of support b) Pull objects toward the center of gravity rather than pushing away c) If pushing, move the front foot forward, and if pulling, move the rear leg back to promote stability d) Face the direction of movement when moving a client e) Use own body as counterweight when pushing or pulling to make the movement easier f) Sliding, rolling and pushing require less energy than lifting and offer less risk for injury g) Avoid twisting the thoracic spine and bending the back while the hips and knees are straight. Guidelines to prevent injury 1) Plan ahead for activities that require lifting, transfer, or ambulation of client, and ask others to be ready to assist at the planned time 2) Be aware that the safest way to lift a client may be with the use of assistive equipment.
3) Rest between heavy activities to decrease muscle fatigue 4) Maintain good posture and exercise regularly to increase strength in arms, legs, back, and abdominal muscles, so these activities will require less energy. 5) Use smooth movements when lifting and moving clients to prevent injury through sudden or jerky muscle movements. 6) When standing for long periods of time, flex the hip and knee through use of a foot rest. When sitting for long periods of time, keep the knees slightly higher than the hips. 7) Avoid repetitive movements of the hands, wrists, and shoulders. Take a break every 15 to 20 minutes to flex and stretch joints and muscles. 8) Maintain good posture (head and neck in straight line with the pelvis) to avoid neck flexion and hunched shoulders, which can cause impingement of nerves in the neck. 9) Avoid twisting the spine or bending at the waist (flexion) to minimize the risk for injury. SEMI-FOWLER'S POSITION 1) Used to prevent regurgitation of tube feedings and aspiration in clients with difficulty swallowing 2) Supine with head of the bed elevated approximately 30 degrees and knees slightly elevated about 15 degrees FOWLER'S POSITION 1) Used during procedures such as NG tube insertion and suctioning. Also for better chest expansion and ventilation, as well as better dependent drainage after abdominal surgeries 2) Supine with head of bed elevated about 45 degrees, and knees slightly elevated about 15 degrees. HIGH-FOWLER'S POSITION 1) Promotes lung expansion by lowering the diaphragm and is used for clients experiencing severe dyspnea 2) Supine with head of bed elevated approximately 90 degrees and knees may or may not be elevated SUPINE OR DORSAL RECUMBENT 1) Ideal for patients with lower back problems 2) Lies on back with head and shoulders elevated on a pillow. Forearms may be placed on pillows or placed at side. A foot support prevents foot drop and maintains proper alignment. PRONE POSITION 1) Promotes drainage from the mouth for clients following throat or oral surgery, but inhibits chest expansion 2) Flat on abdomen with head to one side LATERAL OR SIDE-LYING POSITION 1) This is a good sleeping position, but the client must be turned regularly to prevent development of pressure ulcers on the dependent areas. A 30 degree lateral position is recommended for clients at risk for pressure ulcers. 2) Client lies on side with most of his weight on the dependent hip and shoulder. Arms should be flexed in front of the body. A pillow is placed under his head and neck, the upper arm, and under the leg and thigh to maintain body alignment SIM'S OR SEMI-PRONE POSITION 1) This is a comfortable sleeping position for many clients, and it promotes oral drainage. 2) Client is on his side halfway between lateral and prone positions. (Weight is on the anterior ilium, humerus and clavicle.) Lower arm is behind the client while the upper arm is in front. Both legs are flexed, but the upper leg is flexed at a greater angle than the lower leg at the hip as well as the knee. ORTHOPNEIC POSITION 1) This position allows for chest expansion and is especially beneficial to clients with COPD 2) Sits in the bed or at the bedside. Pillow is placed on the over-bed table, which is placed across the client's lap. Client rests his arms on the over-bed table. TRENDELENBURG POSITION 1) Used during postural drainage, facilitates venous return 2) Entire bed is tilted with the head of the bed lower than the foot of the bed.
REVERSE TRENDELENBURG POSITION 1) Promotes gastric emptying and prevents esophageal reflux 2) Entire bed is tilted with the foot of the bed lower than the head of the bed. DISASTER Mass casualty or intra-facility event that overwhelms or interrupts at least temporarily the normal flow of services of a hospital. TWO TYPES OF DISASTER a) INTERNAL EMERGENCIES - loss of electric power, severe damage or casualties within the facility b) EXTERNAL EMERGENCIES - hurricanes, floods, volcano eruptions, terrorist acts, building collapse, safety and hazardous materials. Categories of Triage The principles of triage should be followed in health care institutions involved in a mass casualty event. Categories are separated in relation to their potential for survival, and treatment is allocated accordingly 1. Class I (Emergent Category) - Highest priority is given to clients who have life-threatening injuries but also have a high possibility of survival once they are stabilized 2. Class II (Urgent Category) - Second-highest priority is given to clients who have major injuries that are not yet life-threatening and who can wait a short time for treatment Inhalational Anthrax SIGNS AND SYMPTOMS: sore throat, fever, muscle aches, severe dyspnea, meningitis, shock TREATMENT/PREVENTION: IV ciprofloxacin (Cipro) Botulism SIGNS AND SYMPTOMS: difficulty swallowing, progressive weakness, nausea, vomiting, and abdominal cramps, difficult breathing TREATMENT/PREVENTION: Airway management, antitoxin, elimination of toxin Smallpox SIGNS AND SYMPTOMS: High fever, fatigue, severe headache, rash (starts centrally and spreads outward) that turns to pus-filled lesions, vomiting, delirium, excessive bleeding TREATMENT/PREVENTION: No CURE, SUPPORTIVE CARE: hydration, pain medication, antipyretics PREVENTION: Vaccine Ebola SIGNS AND SYMPTOMS: Sore throat, headache, high temperature, nausea, vomiting, diarrhea, internal and external bleeding, shock TREATMENT/PREVENTION: No Cure SUPPORTIVE CARE: Minimize invasive procedures PREVENTION: Vaccine Color CODE Designation for Emergencies Newborn abduction: PINK Mass Casualty incident: BLUE Fire: RED Chemical spill: ORANGE Tornado: GRAY Risk factor Assessment 1) Genetics 2) Gender 3) Physiologic 4) Environmental factors 5) Lifestyle-risk behaviors 6) Age 7) Frequency Routine Physical FEMALE: Starting at age 20, every 1 to 3 years, beginning at 40 annually MALE: Starting at age 20, every 5 years; beginning at 40 annually Dental assessments FEMALE and MALE: Every 6 months Blood Pressure FEMALE and MALE: Starting at age 20, each routine health care visit, minimum of every 2 years Body Mass Index FEMALE and MALE: Starting at age 20, each routine health care visit Blood cholesterol FEMALE and MALE: Starting at age 20, minimum of every 5 years Blood Glucose FEMALE and MALE: Starting at age 45, a minimum of every 3 years Colorectal Screening FEMALE and MALE: Fecal occult blood test annually starting at age 50; and flexible sigmoidoscopy every 5 years; or colonoscopy every 10 years; or double contrast barium enema every 5 years Colonoscopy FEMALE and MALE: Starting at age 50, every 1 to 10 years, depending on test used by provider Pap test FEMALE: Starting at age 21 (or earlier if sexually active), annually or every 2 years; After age 30, every 1 to 3 years, depending on test used by provider, and at provider's discretion.
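The screening schedule above lists a body mass index check at each routine visit but never states the formula. As a quick reference, BMI is weight in kilograms divided by the square of height in meters; the category cutoffs in the sketch below are the standard adult reference ranges and are an addition for illustration, not something stated in these notes.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index = weight (kg) / height (m) squared."""
    return weight_kg / (height_m ** 2)

def bmi_category(value: float) -> str:
    # Standard adult cutoffs, added here as an assumption for illustration.
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "normal weight"
    if value < 30:
        return "overweight"
    return "obese"

# Example: a 70 kg client who is 1.75 m tall -> BMI ~22.9, "normal weight"
value = bmi(70, 1.75)
print(round(value, 1), bmi_category(value))
```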
Clinical Breast Exam FEMALE: Starting at age 20, every 3 years; Starting at age 40, yearly Mammogram FEMALE: Starting at age 40, yearly Clinical Testicular Exam MALE: Starting at age 20, every year Prostate Specific Antigen (PSA) Test and digital rectal exam MALE: Starting at age 50, as indicated by the provider PRIMARY PREVENTION Addresses needs of healthy clients to promote health and prevent disease with specific protections (immunization programs, child car seat education, nutrition and fitness activities, health education in schools) SECONDARY PREVENTION Focuses on early identification of individuals or communities experiencing illness, providing treatment, and conducting activities that are geared to prevent a worsening health status (communicable disease screening and case finding, early detection and treatment of diabetes, exercise programs for older adult clients who are frail) TERTIARY PREVENTION Aims to prevent the long term consequences of a chronic illness or disability and to support optimal functioning (prevention of pressure ulcers as a complication of spinal cord injury, promoting independence for the client who has traumatic brain injury) Domains of learning a) Cognitive - obtaining new information, being able to apply the information, and able to evaluate the information. b) Affective - involves feelings, beliefs and ideals. c) Psychomotor - learning how to complete a physical activity or motor skill FACTORS THAT ENHANCE LEARNING 1) Perceived benefit 2) Cognitive and physical ability 3) Health and cultural beliefs 4) Active participation 5) Age/educational level-appropriate methods BARRIERS TO LEARNING a) Fear, anxiety, depression b) Physical discomfort, pain, fatigue c) Environmental distractions d) Health and cultural beliefs e) Sensory and perceptual deficits f) Psychomotor deficits INFANT (Birth to 1 year) Physical development 2 - 3 months: posterior fontanel closes 12-18 months: anterior fontanel closes first 6 months: gains about 150 to 210 g of weight per week 4-6 months: weight doubles End of year 1: Triples weight Infant (Birth to 1 year) Dentition 6-8 teeth erupt by the end of the first year. Solving teething pain of an infant (0 to 1 yr) Cold teething rings, OTC gels, or acetaminophen (Tylenol) and/or ibuprofen (Advil). IBUPROFEN SHOULD ONLY BE GIVEN TO CHILDREN OVER 6 MONTHS.
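A small worked example of the infant growth rules above (a gain of about 150 to 210 g per week in the first 6 months, birth weight roughly doubling by 4 to 6 months and tripling by the end of the first year). This is only a sketch of that arithmetic, not a clinical growth chart.

```python
def expected_weight_kg(birth_weight_kg: float, age_months: int) -> tuple[float, float]:
    """Rough expected weight range for a healthy infant, per the rules of thumb above."""
    if age_months <= 6:
        weeks = age_months * 4.345  # average number of weeks per month (assumption)
        # ~150-210 g gained per week during the first 6 months
        return (birth_weight_kg + 0.150 * weeks, birth_weight_kg + 0.210 * weeks)
    if age_months <= 12:
        # doubles by about 4-6 months, triples by the end of the first year
        return (2 * birth_weight_kg, 3 * birth_weight_kg)
    raise ValueError("rule of thumb only covers the first year")

# Example: a 3.4 kg newborn is expected to weigh roughly 6.8-10.2 kg by 12 months
low, high = expected_weight_kg(3.4, 12)
print(round(low, 1), round(high, 1))
```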
INFANT (0 - 12 months) GROSS MOTOR SKILLS 1 mo: demonstrates head lag 2 mo: lifts head off mattress 3 mo: raises head and shoulders off mattress 4 mo: rolls from back to side 5 mo: rolls from front to back 6 mo: rolls from back to front 7 mo: BEARS FULL WEIGHT ON FEET 8 mo: SITS UNSUPPORTED 9 mo: Pulls up to a standing position 10 mo: Changes from prone to sitting position 11 mo: Walks while holding on to something 12 mo: Sits down from a standing position without assistance INFANT (0 - 12 months) FINE MOTOR SKILLS 1 mo: has a present grasp reflex 2 mo: holds hands in an open position 3 mo: no longer has a grasp reflex; keeps hands loosely open 4 mo: places objects in mouth 5 mo: uses palmar grasp dominantly 6 mo: holds bottle 7 mo: moves object from hand to hand 8 mo: begins using pincer grasp 9 mo: has a crude pincer grasp 10 mo: grasps rattle by its handle 11 mo: can place objects into a container 12 mo: tries to build a two-block tower without success Piaget's Sensorimotor Stage (birth to 24 months) 1) Separation - when infants learn to separate themselves from other objects in the environment 2) Object permanence - at about 9 months, the process by which an infant knows that the object still exists when it is hidden from view 3) Mental representation in the recognition of symbols Erikson's "Trust versus Mistrust" (Infant's psychosocial development) 1) Infants trust that their feeding, comfort and caring needs will be met. 2) Social development is initially influenced by infant's reflexive behavior which includes attachment, separation, recognition, anxiety and stranger fear. 3) SEPARATION ANXIETY develops between 4 and 8 months of age. Stranger fear becomes evident between 6 to 8 months, when children are less likely to accept strangers. 4) Engages in solitary play Immunizations for the Infant (0 - 12 months) Birth - Hep B (Hepatitis B) 2 months - DTaP (Diphtheria and tetanus toxoids and pertussis); RV (rotavirus vaccine); IPV (inactivated poliovirus); Hib (Haemophilus influenzae type B); pneumococcal vaccine (PCV) and Hep B 4 months - DTaP, RV, IPV, Hib, PCV 6 months - DTaP, IPV (6 to 18 months), PCV, Hep B (6 to 12 months), RotaTeq (alternative for RV, requires 3 doses completed by 32 weeks) 6 to 12 months - Seasonal influenza vaccination yearly. The trivalent inactivated influenza vaccine (TIV) is available as an IM injection NUTRITION for INFANTS (0 to 12 months old) 1) Breastfeeding is recommended for the first 6 months of life 2) Iron-fortified formula is an acceptable alternative to breast milk. Cow's milk is not recommended 3) Solids can be introduced between 4 and 6 months 4) Iron fortified rice cereal should be offered first. 5) New food should be introduced one at a time over a 5 to 7 day period. Observe for allergies. Vegetables or fruits are first started between 6 and 8 months. Then meat can be added. 6) Milk, eggs, wheat, citrus fruits, peanuts, peanut butter and honey should be delayed until after the first year of life. 7) By 9 months, table foods that are chopped, cooked and unseasoned are appropriate 8) Iron-enriched food after 6 months TODDLER (1 TO 3 YEARS) Physical Growth 1) 18 months - anterior fontanel closes 2) 24 months - should weigh 4 times his birth weight 3) Height - grows by 7.5 cm (3 in.) per year Toddler (1 to 3 years) - Cognitive Development Piaget - Preoperational 1. Concept of object permanence fully developed 2. Have and demonstrate memories of events that relate to them 3. Domestic mimicry is evident 4.
Preoperational thought does not allow toddlers to understand other viewpoints, but it does allow them to symbolize objects and people in order to imitate activities seen previously. Toddler (1 to 3 years) - Psychosocial Development Erikson - Autonomy versus shame and doubt 1. Independence is paramount for the toddler who is attempting to do everything for himself. 2. Separation anxiety continues to occur when a parent leaves the child. 3. Engages in parallel play Toddler (1 to 3 years) - Moral Development 1. Closely associated with cognitive development 2. Toddlers are unable to see another's perspective; they can only view things from their point of view. 3. The toddler's punishment and obedience orientation begins with a sense that good behavior is rewarded and bad behavior is punished. Preschooler (3 to 6 years) - Physical Growth The preschooler's body evolves away from the characteristically unsteady wide stance and protruding abdomen of the toddler to the more graceful, posturally erect, and sturdy physicality of this age group Preschoolers (3 to 6 years) - Cognitive Development Piaget - Still in preoperational phase of cognitive development. They participate in preconceptual thought (2 to 4 years old) and intuitive thought (from 4 to 7 years of age). Preschoolers make judgments based on visual appearances. Misconceptions in thinking during this age include: a) Artificialism - everything is made by humans b) Animism - inanimate objects are alive c) Immanent justice - a universal code exists that determines law and order. *Intuitive thought - preschoolers can classify information and can become aware of cause and effect relationships. Preschoolers (3 to 6 years) - Psychosocial Erikson - Initiative versus Guilt The preschooler may take on many new experiences despite not having all of the physical abilities necessary to be successful at everything. Guilt may occur when children are unable to accomplish a task and believe they have misbehaved. Guiding preschoolers to attempt activities within their capabilities while setting limits is appropriate. Parallel play shifts to associative play during these years. Play is not highly organized, but cooperation does exist between children. Preschoolers (3 to 6 years) - Moral Development Continues in the good-bad orientation of the toddler but begins to understand behaviors in terms of what is socially acceptable. Preschoolers (3 to 6 years) - Health Promotion Vision screening is routinely done in the preschool population as part of the pre-kindergarten physical exam. Visual impairments such as myopia and amblyopia can be detected and treated before poor visual acuity impairs the learning environment. School age Child (6 to 12 years) - Physical changes 1) Changes related to puberty begin to appear in females 2) Enlargement of testicles with changes in the scrotum for males 3) Appearance of pubic hair 4) Permanent Teeth Erupt 5) Visual acuity improves to 20/20 School age Child (6 to 12 years) - Cognitive Development Piaget - Described as concrete operations 1) Sees weight and volume as unchanging 2) Understands simple analogies 3) Understands the concept of time (days, seasons) 4) Classifies more complex information 5) Understands various emotions people experience 6) Becomes self-motivated 7) Able to solve problems 8) Understands that a word may have other meanings. School age Child (6 to 12 years) - Psychosocial Development Erikson - Industry versus inferiority a. Sense of industry is achieved through advances in learning b.
Motivated by tasks that increase self-worth c. Fear of ridicule by peers and teachers over school-related issues is common. Some children manifest nervous behaviors to deal with the stress, such as nail biting School age Child (6 to 12 years) - Social Development 1) Peer groups play an important part in social development. However, peer pressure begins to take effect. 2) Friendships begin to form between same gender peers. This is the time period when clubs and best friends are popular 3) Children at this age prefer the company of same gender companions School age Child (6 to 12 years) - Health Screenings School-age children should be screened for scoliosis by examining for a lateral curvature of the spine before and during growth spurts. Screening may take place at school or a doctor's clinic. School age Child (6 to 12 years) - Nutrition Obesity is an increasing concern of this age group that predisposes them to low self-esteem, diabetes, heart disease, and high blood pressure. Adolescent (12 to 20 years) - Physical Development a) The final 20% to 25% of height is achieved during puberty b) Girls may cease to grow about 2 to 2.5 years after onset of menarche c) Boys tend to stop growing at around 18 to 20 years old Adolescent (12 to 20 years) - Cognitive Development Piaget - Formal Operations a) Capable of thinking as an adult b) Able to think abstractly and can deal with principles c) Able to evaluate the quality of one's thinking d) Future oriented e) Makes decisions through logical operations Adolescent (12 to 20 years) - Psychosocial Development Erikson - Identity versus Role Confusion a) Adolescent develops a sense of personal identity influenced by family expectations. b) Group identity - adolescent may be a part of a peer group that influences behavior. c) Health perception - may view themselves as invincible to bad outcomes of risky behavior Adolescent (12 to 20 years) - Nutrition Rapid growth and high metabolism require increases in quality nutrients. Nutrients that tend to be deficient during this stage of life are iron, calcium, and vitamins A and C. Young Adult (20 to 35 years) - Physical Development 1. Growth has concluded at around age 20. 2. Cardiac output efficiency and physical senses peak 3. Muscles function optimally at ages 25 to 30 4. Metabolic rate decreases 2% to 4% every decade after age 20. 5. Libido is high for men 6. Libido for women peaks during the later part of this stage 7. Time for childbearing is optimal Young Adult (20 to 35 years) - Cognitive Development Piaget - Formal Operations a) The young adult years are an optimal time for education, formal and informal b) Critical thinking improves c) Memory peaks in the 20's d) Increased ability for creative thought e) Values/norms of friends (social groups) are relevant Young Adult (20 to 35 years) - Psychosocial Development Erikson - Intimacy versus Isolation 1. Young adults may take on more adult commitments and responsibilities 2. Young adults may make occupational choices characterized by high goals, dreams, and exploration and experimentation Middle Adult (35 to 65 years) - Physical Development Middle adults typically decrease in: a. skin turgor and moisture, large intestine muscle tone b. subcutaneous fat, gastric sensations c. melanin in hair (graying), estrogen/testosterone d. hair, glucose tolerance e. visual acuity f. auditory acuity, respiratory vital capacity g. sense of taste, vessel elasticity h.
skeletal muscle mass, height, calcium/bone density Middle Adult (35 to 65 years) - Cognitive Development Piaget - Formal Operations 1) Reaction time/speed of performance slows slightly 2) Memory is intact 3) Crystallized intelligence (stored knowledge) remains intact 4) Fluid intelligence declines slightly Middle Adult (35 to 65 years) - Psychosocial Development Erikson - Generativity versus stagnation Older Adult (65 years and older) - Physical Development 1) Decrease in almost everything 2) Prostate hypertrophy in men 3) Decline in estrogen/testosterone production 4) Decreased sensitivity of tissue cells to insulin 5) Atrophy of breast tissue in women Older Adult (65 years and older) - Cognitive Development Piaget - Formal operations Slowed neurotransmission, impaired vascular circulation, disease states, poor nutrition, and structural brain changes can result in the following cognitive disorders: a) Delirium - often first symptom of infection (UTI) in older adults b) Dementia - Chronic, progressive and possibly with an unknown cause (Alzheimer's) c) Depression - Chronic, acute, or gradual onset (at least 6 weeks) Older Adult (65 years and older) - Psychosocial Development Erikson - Integrity versus Despair Older Adult (65 years and older) - Nutrition Metabolic rates and activity decline as individuals age; therefore, total caloric intake should decrease to maintain a healthy weight. With the reduction of total calorie intake, it becomes even more important that calories consumed be of good nutritional value. THERAPEUTIC COMMUNICATION TECHNIQUES 1) ACTIVE LISTENING - Shows clients that they have your undivided attention 2) OPEN-ENDED QUESTIONS - Used initially to encourage clients to tell their story in their own way. Ask questions in a language that a client can understand 3) CLARIFYING - Questioning clients about specific details in greater depth or directing them toward relevant parts of the history. 4) SUMMARIZING - Validates the accuracy of the story. Therapeutic Communication Helps nurses develop a rapport with clients. The techniques encourage a trusting relationship, whereby clients feel comfortable telling their stories. Nurses introduce the purpose of the interview, gather information and then conclude the interview by summarizing the findings. Components of the health history 1) Demographic information 2) Source of history 3) Chief concern 4) History of present illness 5) Past health history and current health status 6) Family history 7) Social history 8) Health promotion behaviors Review of systems Ascertains information about the functioning of all body systems. 1) Integumentary 2) Head and neck 3) Eyes 4) Ears, nose, mouth and throat 5) Breasts 6) Respiratory 7) Cardiovascular 8) Gastrointestinal 9) Genitourinary 10) Musculoskeletal 11) Neurological 12) Mental Health 13) Endocrine 14) Allergic/immunologic Normal Order of Physical Assessment 1) Inspect 2) Palpate 3) Percuss 4) Auscultate Assessment order of the abdomen 1) Inspect 2) Auscultate 3) Percuss 4) Palpate Inspection 1. First step of assessment, first interaction 2. Uses penlight, otoscope, ophthalmoscope 3. Involves the senses of vision, smell, and hearing Palpation 1. Touching to determine the size, consistency, texture, temperature, location, and tenderness of an organ or body part. Palpate tender areas last. 2. Light palpation (less than 1 cm) is required for most body surfaces. 3. Deeper palpation is used to assess abdominal organs or masses Parts of the hands used for different sensations 1.
dorsal surface - most sensitive to temperature 2. ulnar surface and base of fingers - sensitive to vibration 3. fingertips - sensitive to pulsation, position, texture, size and consistency 4. fingers and thumb - used to grasp an organ or mass. Percussion Involves tapping body parts with fingers, fists, or small instruments to evaluate size, location, tenderness, and presence or absence of fluid or air in body organs, and to detect any abnormalities Techniques for percussion a) Direct percussion - involves striking the body to elicit sounds b) Indirect percussion - involves placing a hand flatly on the body to serve as the striking surface for sound production c) Fist percussion - used to assess for tenderness over the kidneys, liver and gallbladder. Auscultation 1. Used to listen to sounds produced by the body. Some sounds are loud enough to be heard unaided, but most sounds require a stethoscope or a Doppler technique. 2. The DIAPHRAGM of the stethoscope is used to listen to high-pitched sounds (normal heart sounds, bowel sounds, breath sounds) 3. The BELL is used to listen to low pitched sounds (abnormal heart sounds, bruit). The bell should be placed lightly on the body part being examined. General SURVEY A written summary of the impression of the client's overall health. The nurse gathers this information from the first encounter with the client and continues to make observations throughout the assessment process. a) Physical appearance b) Body structure c) Mobility d) Behavior e) Vital signs Vital Signs Measurement of the body's most basic functions. Provide health care staff with information that they need. In most facilities, pain and oxygen saturation are also considered vital signs and may also be measured depending on the reason the client needs health care. Temperature 1) Reflects balance between heat produced and heat lost. 2) Disease or trauma of the hypothalamus or spinal cord will alter temperature control. 3) The rectum, tympanic membrane and urinary bladder are core temperature measurement sites Pulse Measurement of heart rate and rhythm. Pulse corresponds to the bounding of blood flowing through various points in the circulatory system. PARASYMPATHETIC NS - lowers the heart rate SYMPATHETIC NS - raises the heart rate Respiration Body's mechanism for exchanging oxygen and carbon dioxide between the atmosphere and the cells of the body which is accomplished through breathing and recorded as the number of breaths per minute Blood Pressure Reflects the force the blood exerts against the walls of the arteries during contraction (systole) and relaxation (diastole) of the heart. SYSTOLIC - occurs during the VENTRICULAR SYSTOLE of the heart, when ventricles force blood into the aorta, and represents the maximum amount of pressure exerted on the arteries. DIASTOLIC - occurs during VENTRICULAR DIASTOLE of the heart, when the ventricles relax and exert minimal pressure against arterial walls, and represents the minimum amount of pressure exerted on the arteries. Temperature Ranges 1) Oral - 96.8 to 100.4 F (36 to 38 C). Average is 98.6 F (37 C) 2) Rectal - 0.9 F (0.5 C) higher than oral temperature 3) Axillary and Tympanic - usually 0.9 F (0.5 C) lower than oral temperatures 4) Temporal readings are close to rectal, but they are nearly 1 F (0.5 C) higher than oral and 2 F (1 C) higher than axillary temperatures.
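Because the temperature ranges above mix Fahrenheit and Celsius and shift by measurement site, a minimal conversion sketch may help. The site offsets are the 0.9 F (0.5 C) adjustments stated in the notes; the idea of converting a reading to an "oral equivalent" is only an illustration of that arithmetic.

```python
def f_to_c(f: float) -> float:
    """Convert Fahrenheit to Celsius."""
    return (f - 32) * 5 / 9

def c_to_f(c: float) -> float:
    """Convert Celsius to Fahrenheit."""
    return c * 9 / 5 + 32

def estimated_oral_equivalent_f(reading_f: float, site: str) -> float:
    """Adjust a reading toward its oral equivalent using the offsets listed above."""
    offsets = {
        "oral": 0.0,
        "rectal": -0.9,    # rectal reads about 0.9 F higher than oral
        "axillary": +0.9,  # axillary reads about 0.9 F lower than oral
        "tympanic": +0.9,  # tympanic reads about 0.9 F lower than oral
    }
    return reading_f + offsets[site]

# Example: an axillary reading of 98.0 F is roughly 98.9 F oral, which is about 37.2 C
print(round(estimated_oral_equivalent_f(98.0, "axillary"), 1), round(f_to_c(98.9), 1))
```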
Heat production of the body Results from: a) increases in basal metabolic rate, b) muscle activity c) thyroxine output d) sympathetic stimulation (increases heat production) Heat loss from the body Occurs through a) Conduction - Transfer of heat from body to another surface (body immersed in cold water) b) Convection - Dispersion of heat by air currents (wind blowing across exposed skin) c) Evaporation - Dispersion of heat through water vapor (sweating and diaphoresis) d) Radiation - Transfer of heat from one object to another object without contact between them (heat loss from the body to a cold room) Temperature of newborns, older adults and hormonal changes 1. Newborns have a large surface-to-mass ratio; therefore they lose heat rapidly to the environment. Newborn temperatures should be between 97.7 F and 99.5 F (36.5 C - 37.5 C) 2. Older adults' loss of subcutaneous fat results in lower body temperatures. Their average temperature is 96.8 F (36 C). They are most likely to be affected by extreme environmental changes (heat stroke, hypothermia). It takes longer for body temperature changes to register due to changes in temperature regulation. 3. Temperature rises with ovulation and menses. With menopause, body temperature may increase by up to 4 C (7.2 F) Temperature: Exercise, illness and food or fluid intake a) Exercise and dehydration can lead to hyperthermia b) Illness and injury associated with inflammation can elevate temperature. c) Food or fluid and SMOKING can interfere with accurate temperature readings. Wait for 20 to 30 minutes before measuring temperature. Oral temperature technique a) Place under the tongue in the posterior sublingual pocket lateral to the center of the lower jaw b) Hold until sound is heard. If mercury, hold for 3 minutes c) Do not use a glass, mercury filled device for small children or patients who are confused. d) AGE specific - preferred method for children 4 years old and older e) May not be appropriate for clients who breathe through their mouth or have trauma to face or mouth. Rectal temperature technique 1. Provide privacy 2. Assist client to SIM'S POSITION with upper leg flexed. 3. Ask client to breathe slowly and relax while placing a lubricated thermometer (with a rectal probe) into the anus in the direction of the umbilicus (1 and 1/2 inches, or 3.5 cm, for the adult). If resistance is met, remove immediately. 4. Do not use in clients with bleeding disorders or who have rectal disorders. 5. Age specific. More accurate than axillary. Do NOT USE FOR INFANTS 3 months or younger. Use axillary temperature initially 6. Rectal site is used as a second measurement if temperature is above 99 F or 37.2 C 7. Stool in rectum can cause inaccurate readings Tympanic temperature technique 1) Pull the ear up and back for adult. Pull ear down and back for child younger than 3 years old 2) Place thermometer snugly in the outer ear canal. 3) Do not use for infants 3 months or younger. Axillary is preferred. 4) Excess earwax can affect accuracy of reading AXILLARY thermometer The American Academy of Pediatrics recommends the axillary site for infants 3 months old or younger, although readings are less accurate. Complications of Temperature a) Fever is not harmful unless it exceeds 102.2 F or 39 C b) Hyperthermia is abnormally elevated temperature c) Hypothermia is below 35 C or 95 F. Interventions for Hyperthermia (more than 102 F) 1. Obtain specimen for blood cultures if ordered. 2. Assess/monitor WBC, sedimentation rates and electrolytes as ordered. 3.
Administer antibiotics if prescribed, only after obtaining specimens for blood culture 4. Provide fluids and rest 5. Provide antipyretics such as aspirin, acetaminophen (Tylenol), or ibuprofen (Advil). 6. Aspirin is not recommended for children and adolescents who may have a viral illness (influenza, chickenpox) because of the risk of REYE'S SYNDROME. 7. Pediatric antipyretic dosage depends on weight. 8. Prevent shivering as this increases energy demand. 9. Offer blankets during chills and remove them when the patient feels warm. Cooling blanket may be offered. 10. Provide oral hygiene, dry clothing and linens 11. Keep head covered. 12. Maintain environment at 70 to 80 F (21 to 27 C) Interventions for Hypothermia (below 95 F) a) Provide warm environmental temperature, heated humidified oxygen, warming blanket, warmed IV fluids b) Provide cardiac monitoring c) Have emergency resuscitation equipment on standby Physiology of the PULSE 1. Rate - number of times per minute pulse is heard or felt 2. Rhythm - regularity at which each impulse is felt. A premature or late heartbeat can result in an irregular interval in which impulses are felt or heard and can indicate abnormal electrical activity of the heart. 3. Strength (amplitude) - Should be the same from beat to beat and can be graded on a scale from 0 to 4. Amplitude (strength) scale of the PULSE 0 = absent, unable to palpate 1+ = diminished, weaker than expected 2+ = brisk, expected 3+ = increased 4+ = full volume, bounding Range of Adult Pulse 60 to 100 / min at rest Abnormal pulse rates 1. Tachycardia - rate above 100 / min 2. Bradycardia - rate below 60 / min 3. Dysrhythmias - irregular heart rhythm noted as irregular RADIAL pulse 4. PULSE DEFICIT - apical rate faster than the radial rate. With dysrhythmias, the heart may contract ineffectively, resulting in a beat heard at the APICAL SITE WITH NO PULSATION felt at the radial pulse point Pulse Rates and Age a) Infants - 120 to 160 / minute. Gradually decreases as child grows older. b) Average pulse for a 12- to 14-year-old is 80 to 90 / min. c) Older adults' pulses may be weak due to poor circulation and more difficult to palpate FACTORS LEADING TO TACHYCARDIA 1. Exercise and Fever 2. Medication - epinephrine, levothyroxine (Synthroid), beta2-adrenergic agonists (albuterol [Proventil]) 3. Change positions from lying to sitting or standing 4. Acute pain 5. Hyperthyroidism 6. Anemia, hypoxemia 7. Stress, anxiety and fear 8. Hypovolemia and shock FACTORS LEADING TO BRADYCARDIA 1. Long term physical fitness 2. Hypothermia 3. Medications - Digoxin (Lanoxin), beta-blockers (propranolol [Inderal]), calcium channel blockers (verapamil [Calan]) Location of the RADIAL PULSE Located on the thumb side of the forearm at the wrist. a) Place index and middle finger of one hand gently but firmly over the pulse. Assess pulsation for rate, rhythm, amplitude and quality. b) If regular, count 30 seconds and multiply by 2. If irregular, count for a full minute, then compare with the apical pulse Location of APICAL PULSE 1) Fifth intercostal space at the left midclavicular line. Use this SITE for assessing heart rate of INFANT, RAPID RATES (FASTER THAN 100/min), irregular rhythms, and rates prior to the administration of cardiac medication. 2) Place stethoscope on the chest at the fifth intercostal space at the left midclavicular line. Always count an apical pulse for 1 minute.
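A short sketch of the counting arithmetic above: a regular pulse is counted for 30 seconds and multiplied by 2, an irregular pulse is counted for a full minute, and a pulse deficit is the apical rate minus the radial rate. The rate labels use the adult cutoffs listed in the notes (below 60/min bradycardia, above 100/min tachycardia).

```python
def pulse_per_minute(beats_counted: int, seconds: int) -> int:
    """Scale a partial count to beats per minute (use a full 60 s count if irregular)."""
    return round(beats_counted * 60 / seconds)

def classify_adult_pulse(rate: int) -> str:
    if rate < 60:
        return "bradycardia"
    if rate > 100:
        return "tachycardia"
    return "expected range (60-100/min)"

def pulse_deficit(apical_rate: int, radial_rate: int) -> int:
    """Apical rate minus radial rate; a positive value reflects ineffective beats."""
    return apical_rate - radial_rate

# Example: 44 beats counted over 30 s -> 88/min, expected range; deficit of 92 - 88 = 4
print(pulse_per_minute(44, 30), classify_adult_pulse(88), pulse_deficit(92, 88))
```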
RESPIRATION: Physiologic Responses Chemoreceptors in the carotid arteries and the aorta primarily monitor Carbon Dioxide (CO2) levels of the blood. If Carbon Dioxide rises, the respiratory rate increases to rid the body of excess CO2. For clients with COPD, a low oxygen level becomes the primary respiratory drive. Process of respiration a. VENTILATION - exchange of Oxygen and carbon dioxide in the lungs. Measure ventilation with the respiratory rate, rhythm, and depth b. DIFFUSION - exchange of Oxygen and Carbon Dioxide between the alveoli and the red blood cells. Measure diffusion with pulse oximetry. c. PERFUSION - Flow of blood to and from the pulmonary capillaries. Measure perfusion with pulse oximetry. ACCURATE ASSESSMENT OF RESPIRATION Involves observing the rate, the depth and rhythm of chest wall movement during inspiration and expiration. DO NOT INFORM CLIENT that you are measuring respirations. a) Rate - full inspirations and expirations in 1 minute. Observe number of times chest rises and falls. Expected range for adults is 12 to 20 breaths / min. b) Depth - chest wall expansion with each breath. Altered breaths are deep or shallow. c) Rhythm - breathing intervals. A regular rhythm with an occasional sigh is expected in adults PULSE OXIMETRY Noninvasive, indirect measurement of the oxygen saturation (SaO2) of the blood. Range is 95% to 100% although some acceptable levels are at 91% to 100%. Some illness states may even allow for SaO2 at 85% to 89%. Less than 85% is not normal. FACTORS OF RESPIRATION 1. Age: decreases with age. Newborns have rates of 30 to 60 / minute. School age children have rates of 20 to 30 / minute. 2. Gender: Men are diaphragmatic breathers and abdominal movements are more noticeable. Women use more thoracic muscles, and chest muscle movements are more pronounced when they breathe. 3. PAIN in the chest wall: May decrease depth of respiration. At the onset of ACUTE pain, respiration rate increases but stabilizes over time. 4. ANXIETY - increases rate and depth of respiration 5. SMOKING - causes resting rate of respiration to increase. 6. BODY POSITION - upright positions allow chest wall to expand more fully. 7. Medications such as opioids, sedatives, bronchodilators, and general anesthetics decrease the respiratory rate and depth. Respiratory depression can be a serious adverse effect. Amphetamines and cocaine increase rate and depth. 8. Neurological injury to the brainstem decreases respiratory rate and rhythm. 9. Illnesses that affect the shape of the chest wall, change the patency of the air passages, impair muscle function, or diminish respiratory effort increase the use of accessory muscles and the respiratory rate. 10. Impaired oxygen-carrying capacity of the blood that occurs with anemia or at high altitudes results in increases in the respiratory rate and alterations in rhythm to compensate. Respiration assessment Count a regular rate for 30 seconds and multiply by 2. Count the rate for 1 minute if irregular, faster than 20 / min or slower than 12 / min. Note depth (shallow, normal or deep) and rhythm (regular or irregular) Complications of Respiration and Interventions: Hypoxemia - SaO2 below 90% 1. Confirm sensor probe is working 2. Correlate pulse rate on the oximeter with radial or apical pulse 3. Confirm O2 delivery system is functioning and client is receiving required O2. 4. Place client in Fowler's or semi-Fowler's position for ventilation 5. Encourage deep breathing. 6. Suction if needed and as ordered 7. Assess for hypoxemia, tachypnea, tachycardia, restlessness, anxiety, cyanosis 8. Check for hyperthermia 9. Assess vital signs and stay with the patient to decrease anxiety
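The same counting rule applies to respirations (30 seconds times 2 when regular, a full minute when irregular or outside 12 to 20/min), and the notes treat a saturation below 90% as hypoxemia. A minimal sketch of those checks, assuming nothing beyond the numbers stated above:

```python
def respirations_per_minute(breaths_counted: int, seconds: int) -> int:
    """Scale a partial count to breaths per minute."""
    return round(breaths_counted * 60 / seconds)

def needs_full_minute_count(rate_estimate: int, regular: bool) -> bool:
    """Count for a full minute if irregular, faster than 20/min, or slower than 12/min."""
    return (not regular) or rate_estimate > 20 or rate_estimate < 12

def is_hypoxemic(sao2_percent: float) -> bool:
    """The notes flag an SaO2 below 90% as hypoxemia (some illness states tolerate less)."""
    return sao2_percent < 90

# Example: 8 breaths in 30 s -> 16/min, regular, so no full-minute recount is needed
print(respirations_per_minute(8, 30), needs_full_minute_count(16, True), is_hypoxemic(88))
```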
BLOOD PRESSURE: Physiological responses The principal determinants of blood pressure are CARDIAC OUTPUT (CO) and SYSTEMIC VASCULAR RESISTANCE (SVR) BP = CO x SVR CARDIAC OUTPUT is determined by: 1. Heart rate 2. Contractility 3. Blood Volume 4. Venous Return * An increase in any of these increases CO and BP * A decrease in any of these decreases CO and BP SYSTEMIC VASCULAR RESISTANCE Systemic peripheral vascular resistance (SVR) is determined by the AMOUNT OF CONSTRICTION OR DILATION OF THE ARTERIES * An increase in SVR increases BP * A decrease in SVR decreases BP Blood Pressure Classification Systolic = SBP Diastolic = DBP 1. Normal is less than 120 (systolic) and less than 80 (diastolic) 2. PREHYPERTENSION is 120-139 (systolic) and 80 to 89 (diastolic) 3. STAGE 1 HYPERTENSION is 140-159 (systolic) and 90 to 99 (diastolic) 4. STAGE 2 HYPERTENSION is 160 or higher (systolic) and 100 or higher (diastolic) When is a diagnosis of hypertension made? A diagnosis of hypertension is made if the readings are elevated on at least three separate occasions. PULSE PRESSURE Difference between systolic and diastolic readings HYPOTENSION A BP that is below the normal (systolic is less than 90 mm Hg) and can be a result of fluid depletion, heart failure or vasodilation POSTURAL [ORTHOSTATIC] HYPOTENSION A BP that falls when a client changes position from lying to sitting or standing. The client is experiencing orthostatic hypotension if the SBP decreases more than 20 mmHg or the DBP decreases more than 10 mmHg with a 10% to 20% increase in heart rate. It may result from various causes 1. peripheral vasodilation 2. medication side effects 3. fluid depletion 4. anemia 5. prolonged bed rest. BLOOD PRESSURE AND AGE a) Infants have a low BP that gradually increases with age b) Older children and teenagers vary BP based on body size. Larger children have higher BP c) Adult BP tends to increase with age d) Older adults may have a slightly elevated SBP due to decreased elasticity of blood vessels. Other FACTORS that AFFECT BLOOD PRESSURE 1. Circadian rhythm - BP is usually lowest in the early morning hours and peaks during the later part of the afternoon or evening. 2. Stress associated with fear, emotional strain, and ACUTE PAIN can increase BP. 3. Ethnicity - African Americans have a higher incidence of hypertension in general and at earlier ages. 4. GENDER - adolescent to middle-age men have higher BP's than their female counterparts. Postmenopausal women have higher BP's than their male counterparts. 5. Medications such as opiates, antihypertensives, and cardiac medications can lower BP. Cocaine, smoking, cold medications, oral contraceptives, and antidepressants can raise BP. 6. Exercise can decrease BP; obesity can increase BP Equipment for Blood Pressure assessment Sphygmomanometer with a pressure manometer (aneroid or mercury) and an appropriately sized cuff. The width of the cuff should be more than 40% of the arm circumference at the point where the cuff is wrapped. The bladder (inside the cuff) should surround 80% of the arm circumference of an adult and the whole arm for a child. Cuffs that are too large give a false low reading and cuffs that are too small give a falsely high reading.
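A sketch tying together the arithmetic in the blood pressure section above: the classification categories as listed, pulse pressure as systolic minus diastolic, and the orthostatic criterion of a systolic drop of more than 20 mm Hg or a diastolic drop of more than 10 mm Hg on standing. This is only an illustration of those numbers, not a clinical decision tool.

```python
def classify_bp(systolic: int, diastolic: int) -> str:
    """Classify by whichever reading falls into the higher category, per the list above."""
    if systolic >= 160 or diastolic >= 100:
        return "stage 2 hypertension"
    if systolic >= 140 or diastolic >= 90:
        return "stage 1 hypertension"
    if systolic >= 120 or diastolic >= 80:
        return "prehypertension"
    return "normal"

def pulse_pressure(systolic: int, diastolic: int) -> int:
    """Pulse pressure is the difference between systolic and diastolic readings."""
    return systolic - diastolic

def orthostatic_hypotension(lying: tuple[int, int], standing: tuple[int, int]) -> bool:
    """True if SBP falls more than 20 mm Hg or DBP falls more than 10 mm Hg on standing."""
    sbp_drop = lying[0] - standing[0]
    dbp_drop = lying[1] - standing[1]
    return sbp_drop > 20 or dbp_drop > 10

# Examples: 138/88 is prehypertension; pulse pressure of 120/80 is 40 mm Hg;
# a change from 128/80 lying to 102/72 standing meets the orthostatic criterion.
print(classify_bp(138, 88), pulse_pressure(120, 80),
      orthostatic_hypotension((128, 80), (102, 72)))
```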
Procedures to assess the Blood Pressure using the Auscultatory Method. Client instructions 1. Not smoke or drink any caffeine for 30 minutes prior to measurement 2. Rest for 5 minutes before measurement 3. Sit in a chair, with the feet flat on the floor, the back and arm supported and the arm at heart level. Procedures to assess the Blood Pressure using the Auscultatory Method. What the nurse should do 1. Use properly calibrated and validated instruments 2. Use appropriate cuff size 3. Not measure BP in an arm with an IV infusion in progress or on the side where a mastectomy was performed or an arteriovenous shunt or fistula is present. 4. Average 2 or more readings, taken at least 2 minutes apart. If they differ more than 5 mmHg, obtain additional readings and average them. 5. After initial reading, measure BP with client standing Procedures to assess the Blood Pressure using the Auscultatory Method. Step by Step 1. Apply cuff 2 cm above the antecubital space with the brachial artery in line with the marking on the cuff. 2. Use lower extremity if the brachial artery is not accessible 3. Estimate systolic pressure by palpating the radial pulse and inflating the cuff until the pulse disappears. Inflate the cuff another 30 mmHg and slowly release the pressure to note when the pulse is palpable again (estimated systolic pressure) 4. Deflate the cuff and wait 1 minute 5. Position stethoscope over the brachial artery 6. Close pressure bulb by turning bulb clockwise until tight. 7. Quickly inflate cuff to 30 mmHg above the palpated systolic pressure. 8. Release pressure no faster than 2 to 3 mmHg per second 9. The level at which the first clear sounds are heard is the SYSTOLIC PRESSURE 10. Continue to deflate the cuff until the sounds muffle and disappear and note the DIASTOLIC PRESSURE. 11. Record Systolic over Diastolic (110/70 mmHg) Unexpected BP Readings 1. It is helpful to measure BP again toward the end of an interview with the client. Earlier pressures may be higher due to the stress of the clinical setting 2. The cuff must be completely deflated between attempts. Wait at least 1 full minute before reinflating the cuff. Air trapped in the bladder can cause a falsely high reading. Complications of Blood Pressure and Nursing Interventions: Orthostatic (Postural) Hypotension 1. Assess for other related symptoms such as dizziness, weakness, and fainting. 2. Instruct client to activate call light and not get out of bed without assistance. 3. Have client sit at edge of bed for at least 1 minute before standing up. Assist with ambulation. Home health considerations: a) Warn of lightheadedness and dizziness b) Advise client to sit or lie down when symptoms occur c) Suggest that the client get up slowly from lying or sitting and avoid sudden changes in position Complications of Blood Pressure and Nursing Interventions: HYPERTENSION 1. Assess for other symptoms (tachycardia, bradycardia, pain, anxiety). Primary hypertension is usually asymptomatic 2. Assess for identifiable causes of hypertension (renal disease, thyroid disease, medication) 3. Administer medication as prescribed 4. Assess for risk factors 5. Encourage lifestyle modification HYPERTENSION: Lifestyle modification 1. Smoking cessation 2. Dietary modification: DASH (Dietary Approaches to Stop Hypertension) Diet - restrict sodium; consume adequate potassium, calcium, and magnesium. These minerals help lower BP; Restrict cholesterol and saturated fat intake 3. Weight control 4. Modification of alcohol intake 5. Stress reduction 6. Encourage follow up with physician.
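The cuff-size and averaging rules in the blood pressure procedure above are simple arithmetic, so a small sketch may help: cuff width about 40% of the arm circumference, a bladder that covers about 80% of an adult arm, and two readings more than 5 mm Hg apart calling for additional readings before averaging. The 40%/80% thresholds are taken directly from the notes; everything else is illustration.

```python
from typing import Optional

def cuff_fits(cuff_width_cm: float, bladder_length_cm: float,
              arm_circumference_cm: float) -> bool:
    """Width ~40% of the arm circumference and bladder covering ~80% of it, per the notes."""
    return (cuff_width_cm >= 0.40 * arm_circumference_cm
            and bladder_length_cm >= 0.80 * arm_circumference_cm)

def average_readings(readings_mmhg: list[int]) -> Optional[float]:
    """Average two or more readings; return None when they differ by more than 5 mm Hg,
    signalling that additional readings should be obtained first."""
    if max(readings_mmhg) - min(readings_mmhg) > 5:
        return None
    return sum(readings_mmhg) / len(readings_mmhg)

# Example: a 12 cm wide cuff with a 25 cm bladder fits a 30 cm arm; 122 and 126 average to 124
print(cuff_fits(12, 25, 30), average_readings([122, 126]))
```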
Head and Neck Assessments a) CN V (trigeminal) - Assess the face for strength and sensation b) CN VII (facial) - Assess the face for symmetrical movement c) CN XI (spinal accessory) - Assess the shoulders for strength Abnormal findings for head and neck assessments 1. Palpation of a mass 2. Decreased range of motion of the neck 3. Enlarged lymph nodes Assessing CN V (trigeminal) - face for strength and sensation MOTOR: Test the strength of the muscle contraction by asking the client to clench her teeth while palpating the masseter and temporal muscles, and then the temporomandibular joint. TMJ movement should be smooth SENSORY: Test light touch by having the client close her eyes while gently touching her face with a cotton swab and asking her to tell you when she feels the touch. Assessing CN VII (facial) - face for symmetrical movement MOTOR: Test facial movement by having the client smile, frown, puff out cheeks, raise eyebrows, close eyes tightly, and show teeth Assessing the NECK - muscles should be symmetrical - shoulders should be equal in height and with normal muscle mass - ROM - the client should be able to move his head smoothly and without distress in the following directions: a) Chin to chest (flexion) b) Ear to shoulder bilaterally (lateral flexion) c) Chin up (hyperextension) Assessing CN XI (spinal accessory) Place hands on client's shoulders and ask client to shrug the shoulders against resistance. Lymph Nodes - extend from lower half of the head down to the neck. Should be palpated for enlargement 1. Occipital lymph node - base of the skull 2. Preauricular lymph node - in front of the ear 3. Postauricular lymph node - over the mastoid 4. Submandibular lymph node - along the base of the mandible 5. Tonsillar (retropharyngeal) lymph node - angle of the mandible 6. Submental lymph node - midline under the chin 7. Anterior cervical lymph nodes - along the sternocleidomastoid muscle 8. Posterior cervical lymph node - posterior to the sternocleidomastoid 9. Supraclavicular nodes - above the clavicles Examining the thyroid gland - bilobed 1. Inspect lower half of client's neck to see if an enlargement of the gland is visible. A normal thyroid gland is not visible 2. Having the client take a sip of water, watch the thyroid tissue move up 3. Approaching the client from behind and having the client tip her head forward and to the right. Use the left hand to displace the trachea slightly to the right while placing your fingers between the sternomastoid muscle and the trachea. 4. Instructing the client to take a sip of water, and feeling for the movement of the thyroid gland as it moves up with the trachea and larynx. 5. Palpate for size, masses and smoothness. Auscultation of the thyroid If thyroid is enlarged, auscultate gland using a stethoscope. The presence of a bruit indicates an abnormal increase in blood flow to the area. Assessment of the eyes: Abnormal findings 1. Loss of visual fields 2. Asymmetric corneal light reflex 3. Periorbital edema 4. Conjunctivitis 5. Corneal abrasion Assessment of the eyes: Tests and Equipment 1. Visual Acuity - distant vision - Snellen and Rosenbaum charts, eye cover, and Ishihara test; near vision - hand held card 2. Extraocular movements (EOMs) - Penlight or ophthalmoscope light, eye cover 3. Visual fields - eye cover 4. External structures - Penlight or ophthalmoscope light; gloves 5. Internal structures - ophthalmoscope Assessment for visual acuity - CN II 1. Use Snellen chart with patient 20 feet from the chart. A Snellen E (picture) chart may be used for patients who cannot read. 2. Evaluate both eyes and then each eye separately with or without correction. 3. Ask client to read the smallest line of print visible 4. The line for which two or fewer letters are missed is recorded as the visual acuity (20/20 is normal) 5. The first number indicates the number of feet from the chart the client is standing, and the second number is the distance at which a person with normal sight can read the line.
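The Snellen notation just described is a ratio, so a one-line calculation can make it concrete: 20/40 means the client reads at 20 feet what a normally sighted person reads at 40 feet, which works out to a decimal acuity of 0.5. The decimal conversion itself is an illustration, not something the notes require.

```python
def decimal_acuity(test_distance_ft: int, normal_distance_ft: int) -> float:
    """Snellen fraction expressed as a decimal: 20/20 -> 1.0, 20/40 -> 0.5."""
    return test_distance_ft / normal_distance_ft

print(decimal_acuity(20, 20), decimal_acuity(20, 40))  # 1.0 0.5
```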
Screen for MYOPIA impaired FAR vision, using Snellen chart Screen for PRESBYOPIA impaired near vision or farsightedness, using the ROSENBAUM eye chart held 14 inches from client's face. Readings correlate with the Snellen Chart Assess for COLOR VISION using the ISHIHARA test. Client should identify various shaded shapes Assess Extraocular movements (EOMs) Determines coordination of the eye muscle using three different tests (CN III, CN IV, CN VI - 3, 4, 6) Test CORNEAL LIGHT REFLEX by directing a light into the client's eye and looking to see if the reflection is seen symmetrically on the corneas Screen for STRABISMUS with the COVER/UNCOVER test. While covering one eye, client looks in another direction. Cover is removed and both eyes should be gazing in the same direction Six cardinal positions of gaze require client to follow your finger with his eyes without moving his head. Move your finger in a wide "H" pattern about 20 to 25 cm from the client's eyes. Eye movements should be smooth and symmetrical with no jerky tremor-like movements (NYSTAGMUS) Evaluate Visual Field (CN II) by facing the client at a distance of 60 cm. The client covers one eye while you cover the directly opposite eye (client's right eye and your left eye). Ask client to look at you and report when she can see the fingers on your outstretched arm coming in from four directions (up, down, temporally, nasally). Expected finding is that the client should see your fingers in the same way that you do. Exophthalmos Bulging eyes Strabismus Crossed eyes Expected External Structures of the eye 1. Parallel without bulging 2. Eyebrows should be symmetrical and evenly distributed from inner canthus 3. No edema or redness on the lacrimal gland 4. Conjunctiva: palpebral is pink, bulbar is transparent 5. Sclera should be white, light yellow in dark-skinned clients 6. Corneas are clear 7. Lenses are clear (cloudiness indicates cataracts) PTOSIS Drooping of the upper eyelid over the pupil PERRLA (CN II, CN III) P - Pupils should be clear E - Equal in size and between 3 to 5 mm R - Round in shape R - Reactive to light both directly and consensually when a light is directed into one pupil and then the other L - Light A - Accommodation of the pupils when they dilate to look at an object far away and then CONVERGE and CONSTRICT to FOCUS on a near object. Expected condition of IRIS Should be round and illuminate fully when a light is shined across from the side. A partially illuminated iris indicates glaucoma. Expected internal eye structures 1. Optic disk is light pink or more yellow than the surrounding retina. 2. Retina should be without lesions, and color will be dark pink in those with a dark complexion and light pink in fair-skinned clients. 3. Arteries and veins are found in a 2-to-3 ratio and without nicking 4. Macula may not be readily visible without pupil dilation but may be briefly glimpsed when the client looks directly at the light. 5. Clear fluid tears without discharge. Assessment of ears, nose, mouth and throat a) Includes external, middle and inner ear.
Evaluates hearing, nose and sinuses, mouth and throat b) Abnormal findings include: Otitis externa, osteoma, polyp, retracted drum, decreased hearing acuity, and lateralization Assess ears for hearing CN VIII - acoustic Assess nose for smell CN I - olfactory Assess mouth for taste CN VII - facial CN IX - glossopharyngeal Assess tongue for movement and strength CN XII - hypoglossal Assess mouth for movement of soft palate and gag reflex CN IX - glossopharyngeal Assess swallowing and speech CN X - vagus Expected findings for external ear 1. Alignment - top of auricles should meet an imaginary horizontal line that extends from the outer canthus of the eye. 2. Ear color should be the same as face color 3. Lesions and tenderness in ears are unexpected 4. Ear canal should be free of foreign bodies or discharge 5. Cerumen is an expected finding Assessment of the INTERNAL EAR Straighten the ear canal by pulling the auricle up and back for adults and older children, and down and back for younger children. Using otoscope, insert speculum 1 to 1.5 cm following but not touching the ear canal to visualize a) a tympanic membrane that is pearly gray and intact, free from tears. b) a light reflex that is visible and in a well defined cone shape c) umbo and manubrium landmarks that are readily visible d) ear canals that are pink with fine hairs Auditory Screening Test: WHISPER TEST CN VIII TECHNIQUE: One ear is occluded and the other tested to see if the client can hear whispered sounds without seeing mouth move EXPECTED FINDINGS: Client can hear you softly whisper 30 to 60 cm away Auditory Screening Test: RINNE TEST TECHNIQUE: (1) Place a vibrating tuning fork firmly against the mastoid bone and note the time; (2) Have client state when he can no longer hear the sound, note the time, and then move the tuning fork in front of the ear canal. When client can no longer hear the tuning fork, note the time. EXPECTED FINDINGS: Air conduction (AC) greater than bone conduction (BC) 2 to 1 ratio. Auditory Screening Test: WEBER TEST TECHNIQUE: Place a vibrating tuning fork on top of the client's head. Ask client if sound is heard best in the right, left or both ears equally EXPECTED FINDINGS: Sound is heard equally in both ears (negative Weber test) PRESBYOPIA Diminishing ability to see close objects or read small print PRESBYCUSIS Hearing loss, loss of acuity for high frequency tones Breast Self Examination (BSE) a) Inspection can be done in front of a mirror. Palpation can be done in a shower. b) BSE should always be performed following a menstrual cycle. c) If client is menopausal, BSE should be performed on the same day of each month. Breast Examination: Expected findings FEMALE: Breast should be firm, elastic, and without lesions or nodules. Breast tissue may feel granular or lumpy bilaterally in some women MALE: No edema, masses or tenderness should be present. Areolas are round and darker pigmented. Breast Examination: Unexpected findings FEMALE: Fibrocystic breast disease is characterized by tender cysts that are often more prominent during menstruation MALE: Unilateral or bilateral (but asymmetrical) gynecomastia in adolescent boys or bilateral gynecomastia in older adult males may be present. Assessment of ANTERIOR AND POSTERIOR THORAX AND LUNGS : Positioning The posterior Thorax is best assessed with the client sitting or standing. The anterior Thorax can be assessed with the client sitting, lying or standing.
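The Rinne test earlier in this section compares how long a tone is heard by air conduction versus bone conduction, with air conduction expected to last about twice as long. A minimal sketch of that comparison follows; the interpretation labels for an abnormal result are standard teaching rather than something spelled out in these notes.

```python
def rinne_result(air_conduction_s: float, bone_conduction_s: float) -> str:
    """Expected finding: air conduction heard roughly twice as long as bone conduction."""
    if air_conduction_s >= 2 * bone_conduction_s:
        return "expected (AC about 2x BC)"
    if air_conduction_s > bone_conduction_s:
        return "AC > BC but less than 2:1 - document and compare with other findings"
    # Standard interpretation added for illustration, not stated in the notes:
    return "BC >= AC - suggests conductive hearing loss"

# Example: tone heard 30 s by air conduction after 15 s by bone conduction
print(rinne_result(30, 15))
```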
Assessment of ANTERIOR AND POSTERIOR THORAX AND LUNGS : Anatomical Reminder The right lung has three lobes, while the left lung has two lobes. Auscultating the right middle lobe is done using the axillary sites. CHEST LANDMARKS (used to perform assessments correctly and describe the findings) 1. midsternal line is through the center of the sternum 2. midclavicular line is through the midpoint of the clavicle 3. anterior axillary line is through the anterior axillary folds 4. midaxillary line is through the apex of the axillae 5. posterior axillary line is through the posterior axillary fold 6. right and left scapular lines are through the inferior angle of the scapula 7. vertebral line is along the center of the spine. Assessment of ANTERIOR AND POSTERIOR THORAX AND LUNGS : Percussion and auscultatory sites POSTERIOR THORAX: Sites are between the scapula and the vertebrae on the upper portion of the back. Below scapula, sites are along the right and left scapular lines ANTERIOR THORAX: Sites are along the midclavicular lines bilaterally with several sites at the anterior/midaxillary lines bilaterally in the lower portions of the chest wall and on either side of the sternum following along the rib cage. Assessment of ANTERIOR AND POSTERIOR THORAX AND LUNGS : Inspection 1. Shape - anteroposterior diameter should be half of the transverse diameter. 2. Symmetry - chest should be symmetric with no deformation of the ribs, sternum, scapula, or vertebrae with equal movements of respiration. 3. ICS - you should not see excessive retractions Assessment of ANTERIOR AND POSTERIOR THORAX AND LUNGS : Respiratory Effort a) Rate and pattern - regular with 16 to 20 / minute b) Character of breathing (diaphragmatic, abdominal, thoracic) c) Use of accessory muscles d) Chest wall expansion e) Depth of respiration (unlabored, quiet breathing is the expected finding). f) If cough is productive, note color and consistency of sputum g) trachea should be midline Assessment of ANTERIOR AND POSTERIOR THORAX AND LUNGS : Palpation 1. Surface characteristics include tenderness, lesions, lumps, and deformities. Tenderness is an unexpected finding. 2. Chest excursion or expansion of the posterior thorax 3. Vocal (tactile) fremitus: expected findings - vibration is symmetric and more pronounced at the top. Assessment of ANTERIOR AND POSTERIOR THORAX AND LUNGS : Percussion a) Normal percussion of the thorax should result in resonance. b) Abnormal findings and significance: (1) Dullness - caused by fluid or solid tissue, this can indicate a pneumonia or tumor; (2) Hyperresonance - caused by the presence of air, this can indicate pneumothorax or emphysema. Assessment of ANTERIOR AND POSTERIOR THORAX AND LUNGS : Auscultation EXPECTED SOUNDS: a. Bronchial - loud, high pitched, expiration heard longer than inspiration over the trachea. b. Bronchovesicular - medium pitch and intensity, equal inspiration and expiration, and heard over the larger airways c. Vesicular - soft, breezy, low pitched, with inspiration heard longer than expiration, heard over most of the peripheral lung fields Assessment of ANTERIOR AND POSTERIOR THORAX AND LUNGS : Abnormal or Adventitious Sounds 1) Crackles or rales - fine to coarse popping heard as air passes through the fluid or re-expands small airways 2) Wheezes - high-pitched whistling, musical sounds as air passes through narrowed or obstructed airways, usually louder in expiration 3) Rhonchi - Coarse sound heard during either inspiration or expiration resulting from fluid or mucus. MAY CLEAR WITH COUGHING.
4) Pleural friction rub - Grating sound produced as the inflamed visceral and parietal pleura rub against each other during inspiration or expiration. 5) Absence of breath sounds should be noted. Assessment of ANTERIOR AND POSTERIOR THORAX AND LUNGS : HEART - Cardiac Cycle - SYSTOLE - Lub - Apex Closure of the MITRAL and TRICUSPID valves signals the beginning of VENTRICULAR SYSTOLE (contraction) and produces the S1 SOUND (Lub). This is heard best with the DIAPHRAGM of the stethoscope at the APEX. Assessment of ANTERIOR AND POSTERIOR THORAX AND LUNGS : HEART - Cardiac Cycle - DIASTOLE - Dub - Aortic Area Closure of the AORTIC and PULMONIC valves signals the beginning of VENTRICULAR DIASTOLE (relaxation) and produces the S2 SOUND (Dub). This is heard best with the diaphragm of the stethoscope at the AORTIC AREA Assessment of ANTERIOR AND POSTERIOR THORAX AND LUNGS : HEART - Ventricular Gallop - S3 - BELL An S3 sound (VENTRICULAR GALLOP) is produced by RAPID VENTRICULAR FILLING and can be a normal finding in children and young adults. This is heard best with the BELL of the stethoscope Assessment of ANTERIOR AND POSTERIOR THORAX AND LUNGS : HEART - S4 - Strong Atrial Contractions - Bell An S4 sound is produced by a strong atrial contraction and can be a normal finding in older and athletic adults and children. This is heard best with the bell of the stethoscope. Assessment of ANTERIOR AND POSTERIOR THORAX AND LUNGS : HEART - Murmurs - Swishing - Bell Murmurs are heard best when blood volume is increased in the heart, or the flow of blood is impeded or altered. A murmur is heard in the heart as a blowing or swishing sound. This is heard best with the bell of the stethoscope. a) Systolic murmurs are heard just after S1 b) Diastolic murmurs are heard just after S2 Assessment of ANTERIOR AND POSTERIOR THORAX AND LUNGS : HEART - Thrills THRILLS are palpable vibrations that may be present with murmurs or cardiac malformations Assessment of ANTERIOR AND POSTERIOR THORAX AND LUNGS : HEART - Bruit BRUITS are produced by obstructed peripheral blood flow and are heard as a blowing or swishing sound with the Bell of the stethoscope. CARDIAC LANDMARKS 1. AORTIC - just right of the sternum at the second ICS 2. PULMONIC - just left of the sternum at the second ICS 3. ERB'S POINT - just left of the sternum at the third ICS 4. TRICUSPID - just left of the sternum at the fourth ICS 5. APICAL / MITRAL - left midclavicular line at the fifth ICS ASSESSMENT FOR RIGHT-SIDED HEART FAILURE Inspect JUGULAR VEINS with client in bed with the head of the bed at a 30 to 45 degree angle to assess for RIGHT SIDED HEART FAILURE 1) No neck vein distention should be noted 2) Jugular venous pressure (JVP) - should be measured at less than 2.5 cm above the sternal angle. Assessment of ANTERIOR AND POSTERIOR THORAX AND LUNGS : HEART - PMI APICAL PULSE or point of maximal impulse (PMI) a) May be visible just lateral to the left midclavicular line at the fifth ICS. With female clients, displace the breast tissue. b) Palpate where it was visualized; try to identify its location and size. c) Expected finding - The apical pulse should be just lateral to the left midclavicular line at the fifth ICS and no larger than 2.5 cm in diameter. Heaves or lifts are abnormal, visible elevations of the chest wall that are seen with heart failure, and are often located along the left sternal border or at the PMI What is a THRILL Use the ULNAR surface of the hand to feel for vibrations similar to that of a purring kitten.
This is NOT AN EXPECTED FINDING. AUSCULTATION OF THE HEART Positioning the client in three different ways allows for optimal assessment of heart sounds, as some extra or abnormal sounds are accentuated by the various positions. 1. Sitting, leaning forward 2. Lying supine 3. Turned toward the left side (best position for picking up extra heart sounds or murmurs). Determining the HEART RATE Listen and count for 1 minute. Determine if rhythm is regular. If a dysrhythmia exists, check for a pulse deficit in which the radial pulse will be slower than the apical pulse. Sites to assess for BRUITS (produced by obstructed peripheral blood flow and are heard as a blowing or swishing sound with the Bell of the stethoscope). 1. Carotid arteries - over the carotid pulses 2. Abdominal aorta - just below the xiphoid process 3. Renal arteries - midclavicular lines above the umbilicus on the abdomen 4. Iliac arteries - midclavicular lines below the umbilicus on the abdomen 5. Femoral arteries - over the femoral pulses. Assessment of the ABDOMEN a) Includes observing the shape of the abdomen, palpating for masses, and auscultating for vascular sounds b) Use INSPECTION, AUSCULTATION, PERCUSSION and PALPATION. This is done to allow the bowel sounds to be heard without being disturbed or distorted by percussion or palpation assessments. Preparing and positioning a client for an abdominal assessment Have the client VOID prior to an abdominal examination. Position the client lying SUPINE with arms at his sides with his knees slightly bent. ABDOMINAL LANDMARKS Designated using the UMBILICUS. Imaginary vertical and horizontal lines pass through the umbilicus that divide the abdomen into 4 quadrants: XIPHOID PROCESS - upper boundary SYMPHYSIS PUBIS - lower boundary 1. Right Upper Quadrant - RUQ 2. Left Upper Quadrant - LUQ 3. Right Lower Quadrant - RLQ 4. Left Lower Quadrant - LLQ ASSESSMENT FOR THE ABDOMINAL SKIN 1. Lesions - bruises, rashes or other lesions 2. Scars - location and length 3. Striae or stretch marks that are silver in color (expected findings) 4. Dilated veins - unexpected related to cirrhosis or inferior vena cava obstruction 5. Jaundice, cyanosis, or ascites - may be associated with cirrhosis ABDOMINAL AUSCULTATION Bowel sounds are produced by the movement of air and fluid in the intestines. The most appropriate time to auscultate bowel sounds is between meals. a) Listen with the diaphragm of the stethoscope in all four quadrants b) Expected sounds are high pitched clicks and gurgles which are heard 5 to 30 times per minute. To determine that bowel sounds are absent, you must listen for a full 5 minutes without hearing anything. Abdominal Friction Rubs Caused by the rubbing together of the inflamed layers of the peritoneum. 1. Listen with diaphragm over the liver and the spleen 2. Ask client to take a deep breath while you listen for grating sounds like sandpaper rubbing together TYMPANY a) Expected percussion sound heard over most of the abdomen. A lower pitch tympany over the gastric bubble in the left upper quadrant may be heard. b) Dullness over the liver or a distended bladder may be heard. ASSESSMENT OF KIDNEY TENDERNESS Assessed by FIST percussion over the costovertebral angles at the scapular lines on the back. The expected finding is no tenderness. EXPECTED CHANGES WITH AGING: Breasts With menopause, breast tissue atrophies and is replaced with adipose tissue, making it feel softer and more pendulous. The atrophied ducts may feel like thin strands.
Nipples no longer have erectile ability and may invert. EXPECTED CHANGES WITH AGING: Lungs Chest shape changes. Chest expansion diminishes. Cough reflex diminishes. Cilia become ineffective. Alveoli dwindle and there is greater resistance and higher risk of infection. Kyphosis takes place due to osteoporosis and weakened cartilage. EXPECTED CHANGES WITH AGING: Cardiovascular system Systolic hypertension is common with atherosclerosis. PMI becomes more difficult to palpate. Coronary blood vessel walls thicken and become more rigid with a narrowed lumen. Cardiac output decreases; weaker contraction leads to poor activity tolerance. Heart valves stiffen due to calcification. Left ventricle thickens. Pulmonary vascular tension increases. Systolic blood pressure rises. Peripheral circulation lessens. EXPECTED CHANGES WITH AGING: Abdomen Abdominal muscles decline in tone and more adipose tissue creates a rounder girth. Diminished signs and symptoms of peritoneal inflammation, such as less pain, guarding, fever. Saliva, gastric secretions and pancreatic enzymes decrease. Smooth muscle changes.
https://quizlet.com/24540122/fundamentals-of-nursing-finals-from-ati-concepts-outlines-and-definitions-for-foundations-of-nursing-flash-cards/
CC-MAIN-2018-17
en
refinedweb
EDIT: Mainly the GetObjectsAt and CheckCollisions methods Hope this gives some insight. 1. I meant profiling...2. structuring your code better would help a lot in sharing & understanding for people which might look upon your code...3. At the very least, a buch of conveniently placed (as in when calling a fnction or traversing a loop iteration) "debug" printf with timestamps might help. 1a. Now i understand your suggestion. I have tried using std::cout for timestamps, where i call the current time and track how long it takes to get through a loop. Would you like me to place this back in and check how long some loops are taking? 2a. Ill try to clean it up as to help others understand and not get lost. Since it is my code, I understand it fully, but I will be sure to make it more readable for not only you guys, but for others who look at it for learning. 3a. refer to 1a. 1. some quad trees may actually slow down performance instead of improving it...2. Also note, if you have any static objects which never move, you don't need to rebuild a quadtree for them every tick, you need to add them just once... 1a. I understand what you mean. I do know that my quad tree does not destroy the quadrants that were made in the last loop. This meaning that if it broke itself down into 10 quadrants, on the next iteration, all of the quadrants from before will be cleared, but the leaves and branches are never reset. I Imagine that could cause some unnecessary looping in the tree search (maybe not actually). 2a. Yeah i currently do not add the static units to the tree at all (since im not worried about collisions with them at this moment). The only units that are added to the quad tree are the ships that move.. Boost has a nice spatial partitioning module which you can adapt to your needs. It's pretty fast and doesn't even require linking, it's headers only.. objects refers to anything that is a collide-able object. As of right now, i am only adding ships to the quad-tree. Later ill make other objects collide-able. As for the GetObjectsAt() method: I did not write any of the quadtree code. I was fortunate enough to find a post where someone had made a adaptive quadtree. link: could be that i do not understand it well enough to use it correctly, but i think i have a pretty good idea. Also, in the comments at the bottom of that page, she talks about creating a function that checks an area around a unit rather than passing a point. The current usage does work, however. as it breaks down into smaller pieces, each object is in only 1 node. so every node that is higher than the "leaf" node is empty, until it hits the leaf. I did some of the printing for it, and it is returning the correct amount of units that are in the area (unless i'm misunderstanding what you mean). Boost has a nice spatial partitioning module which you can adapt to your needs. This seems great, but looking over it has me a bit confused. I'm still learning as much as i can in C++, so some of this seems out of my depth. Not so sure you were fortunate. When I look at this: if ( !objects.empty() ) returnedObjects.insert( returnedObjects.end(), objects.begin(), objects.end() ); You've just added every single object to the list that's returned. Then a little further down, you add every single object again, multiple times. What am I missing? I can't speak for the implementation, but I GUESS that if it's not a leaf node it should not have objects in itself. So we're basically looking at a bunch of dead code. 
IIF on the other hand, the non-leaf node contains objects... It is unlikely that Google shares your distaste for capitalism. - DerezoIf one had the eternity of time, one would do things later. - Johan Halmén Ok then, I'm not sure. I can't tell anymore about it without seeing the whole code, but rather than do that I would learn about profiling as pkrcel suggested. EDIT: The fact that the class is called QuadTree to me implied the whole tree, if it's a node then "Node" would be a better name. You've just added every single object to the list that's returned. right, i see where you are confused. Essentially this code runs through each quadtree level until it hits a level that STILL has objects in it. This may clear it up for you a bit: In addObjects() we add all objects into the quad tree. if the quadrants hits the maximum units, it subdivides, creating the leaves and clearing itself of any objects in the currently level, but adding them to the next level down. so in the getObjectsAt() function, every node that is (!isLeaf) will be empty for objects, and even though it seems that it is constantly adding objects, technically the objects in the higher level quadrants are empty. So we're basically looking at a bunch of dead code. Like i said before, im learning this stuff as i go, so i have no idea how to just pass over the dead code directly to the isLeaf quadrant. Talking about getObjectsAt() again, ill go through what i think it does, and you can correct me if im wrong. if the quadrant is a leaf, return all objects in the leaf. else, create two vectors to store data. if(!objects.empty()) add to returnedObjects. however, this should always be empty because we move objects down the leaves and erase where they were before. then in the for loop here, we find what node contains x,y and continue to traverse the tree until we find a leaf, and add those objects to the returned objects. Alright. What is numObjectsToGrow set at? Is that the value that when set low causes slowness? It shouldn't be really low. It'll be faster to check collisions with 10 objects than to loop through a ton of nodes. Also how many objects are there in total (so I don't have to read to find it?) no problem trent i currently have numObjectsToGrow set at 20. ive tried different variances, the lower the worse it is, the higher the better (to a point... like 40). total objects currently is 200. i would like to get it to where it can run a maximum of 500. if this is not possible, i am 100% ok with just running 250-300. But this is all end game, if i can get past this problem. as of now, im running 100-200 ships and its causing my frames to drop to around 30 (from 60) Well I don't feel like doing a full analysis of the quadtree but I don't think 500 objects is unreasonable at all, depending on system specs a little bit. FWIW in my game I don't even have a full blown quadtree, I just have a grid I put objects into. It's very simple, just divide each area into 100x100 (or whatever) grid array and check objects in [x][y] in the array. In total there are probably up to a couple thousand objects in some areas and it runs on an old netbook and even a Raspberry Pi. So this leaves me questioning the efficiency of the quadtree implementation you're using. Again, I'd have to do a full analysis of it to figure out what's slow OR profile it but since you're the one who already is set up to build the project, you could profile it. What compiler are you using? 
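As an illustration of the fixed-grid broad phase described above (bucket every ship into the cell its position falls in, then only test pairs that share a cell), here is a minimal C++ sketch. It is not code from this project; the 100-pixel cell size and the Ship fields are assumptions for illustration:

#include <vector>
#include <cstddef>

struct Ship { float x, y, radius; };

// One bucket of Ship pointers per cell; clear() and re-add() every tick.
struct Grid {
    static const int CELL = 100;              // cell size in pixels (assumed)
    int cols, rows;
    std::vector<std::vector<Ship*> > cells;

    Grid(int width, int height)
        : cols(width / CELL + 1), rows(height / CELL + 1), cells(cols * rows) {}

    void clear() {
        for (std::size_t i = 0; i < cells.size(); ++i) cells[i].clear();
    }

    void add(Ship* s) {
        int cx = (int)s->x / CELL, cy = (int)s->y / CELL;
        if (cx >= 0 && cx < cols && cy >= 0 && cy < rows)
            cells[cy * cols + cx].push_back(s);
    }

    // Only ships sharing a cell are tested against each other.
    // (A fuller version would also check the 8 neighbouring cells so that
    // ships straddling a cell border are not missed.)
    void checkCollisions(void (*onHit)(Ship*, Ship*)) {
        for (std::size_t c = 0; c < cells.size(); ++c) {
            std::vector<Ship*>& bucket = cells[c];
            for (std::size_t i = 0; i < bucket.size(); ++i)
                for (std::size_t j = i + 1; j < bucket.size(); ++j) {
                    float dx = bucket[i]->x - bucket[j]->x;
                    float dy = bucket[i]->y - bucket[j]->y;
                    float r  = bucket[i]->radius + bucket[j]->radius;
                    if (dx * dx + dy * dy < r * r) onHit(bucket[i], bucket[j]);
                }
        }
    }
};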
I've done profiling with GCC-based compilers but I believe it doesn't work well on Windows. I've also used the profiler from Software Verify (which is insanely expensive, but you can get a free trial) which works with GCC/MSVC built code on Windows. If you're not using Windows than GCC-based profiling is easy to set up. If you're using MSVC then Google for profiling for that, I'm sure there are free ways to do it, probably without installing anything extra. I've even heard of free profilers that work on GCC-based code for Windows so Google for that also. I'm not a spatial indexing expert but there's really something I don't grasp in this quadtree implementation. Namely, in addobject there's something like this: void QuadTree::AddObject(GameObject* object){ if (isLeaf) { //ad objs and split quadrant if reached a max obj threshold then.. return; } //otherwise: for (each child node){ //recursively check if node contains object add object to that node and then //in ANY other case??? *** objects.push_back(object) // should we ever end here? I surely hope not } I agree with Trent that right now we should look into the Quadtree implementation: my impression is that there might be some unnecessary object loading into returned vectors and you end up doing definitely too many collision checks. Or might be that you're rebuilding the tree from scratch for each object and things get exponentially heavier even for a small set of objects. But I'm just speculating, eh. Seems that it adds those objects if they don't lie within the quadtree, and then returns them where I mentioned earlier. Check to see if that last line, objects.push_back(object) ever gets run. Well I don't feel like doing a full analysis of the quadtree but I don't think 500 objects is unreasonable at all......What compiler are you using? i am currently using visual studio 2012 on windows.Thinking of it now, i have been so focused on using this quad tree, because i thought that cleaning up my code with it would be more efficient, that i forgot that my old system actually worked much better. what i did before was create 12 quadrants in the display area and update each ship based on what quadrant they are in. then i would test each ship for collision in their quadrant. the problem with this was that it cause a TON of extra, messy code. but it did work MUCH better. the other thing that made this hard to use was that if every single ship was in the same quadrant, i had no idea what to do to split it up (since i was just using lists, without the adaptive quadtree implemented.)If you would like to see an example of what i am talking about let me know and ill post the previous code. It sounds like what you are talking about with the 100x100 grid, but Im not sure i fully understand what you mean by it. I'm not a spatial indexing expert but there's really something I don't grasp in this quadtree implementation. Looking over what you are referring to:AddObject()if(isLeaf) put the ship into object list in this leaf, if the max size is hit, create new leaves and move the objects to the leaves while removing them from this tier of the tree. for(nodecount) this only runs if the last part was falsecheck which sub node contains the gameobjects point, then add it to that quadrant. as for the last part: //in ANY other case??? *** objects.push_back(object) // should we ever end here? I surely hope not I have no idea why this is here, and i agree... it should never get to this code. 
i have it commented out now in my program and it seems not to effect it at all. Last comment on the page you linked says there are mistakes in the code and provides a link to a corrected version. Yeah aikei, I did see that. I tried to implement his changes, but they caused the program to crash constantly. Essentially the change he did was cause the program to delete all sub nodes on quad.clear(). However when I tried using the code, I would get code block errors. I can redo the changes and tell you exactly what happens if you like. This is intersting. Looking at the edits Saduino put in the Quadtree implementation I can't see why you're getting errors, BUT it cleaned the AddObject code and put a needed delete[] for the sub nodes. Also, you're rebuildng the quadtree each timer tick assuming quad points to the Root node: without that delete[] you end up with unneeded leaves in the tree (and possibily duplicate ones? CreateLeaves just uses new[] each time, is there some kind of memory leak?) Also, why is this called "adaptive" if it just rebuilds every frame? ...nothing wrong I assume since it shouldn't cost that much rebuilding it, even thou standard memory allocation is kinda slow. Pkrcel- that is what I thought as well. I figured that adding the delete [] nodes would be beneficial because you are resetting the tree. The current way it is set up, it never deletes the nodes after the clear... so that means that all of the nodes that were created in the previous are just sitting around for nothing...when I added the delete, it gave me a code block error. It's my assumption that when it runs through the clear function, it also deletes the top most nodes, which in turn cause the program to crash when it tries to add new items to the top most nodes that have just been deleted (since they are never redeclared). I think you are correct in saying that there could be a memory leak issue. The only reason, I believe, that it is considered adaptive is because it adapts it's quadrants to the number of units. In her previous posts on her site she created another tree that doesn't change due to units in the area.I understand why youre saying you don't understand why it's considered adaptive since it's constantly clearing and rebuilding... I will show you how you can use boost RTree for your purposes. First of all, you need to start using vectors, as I showed you earlier: Then, you need to define a rectangle class which contains its lower left and bottom right points, as well as its center and size, just in case: Now, each of your actors will store its position as a rectangle, along with other things: class Actor { Rect boundingBox; ... public: const Rect& GetBoundingBox() const { return boundingBox; } const Vec2& GetPosition() const { return boundingBox.center; } void SetPosition(const Vec2& newPos) { boundingBox.SetCenter(newPos); } ... }; When you move your actor, you set its new position with the SetPosition() method. You could also define a MovePosition() method which will add argument vector to the center of the bounding box, instead of replacing it: void MovePosition(const Vec2& moveBy) { boundingBox.SetCenter(boundingBox.center+moveBy); } This is how you can move your actor: Actor* actor; Vec2 positionChange(2, 1); actor->MovePosition(positionChange); or actor->MovePosition(Vec2(2,1)); Now you have a bounding box for each of your objects, which should dynamically change every tick. 
Now we need to register your new classes with boost so they could be used for boost RTree instead of internal boost classes: #include <boost/geometry.hpp> #include <boost/geometry/geometries/point.hpp> #include <boost/geometry/geometries/box.hpp> #include <boost/geometry/geometries/register/point.hpp> #include <boost/geometry/geometries/register/box.hpp> #include <boost/geometry/index/rtree.hpp> //first agrument is the name of your 2D vector class, second - value type used for vector coordinates by your class, and then names of variables or methods used to access x and y coordinates respectively. BOOST_GEOMETRY_REGISTER_POINT_2D(Vec2,float,cs::cartesian,x,y); //first argument - name of your rectangle class, then name of your point class and names of your variables or methods used to access to top left and bottom right points of the rectangle respectively BOOST_GEOMETRY_REGISTER_BOX(Rect,Vec2,topLeft,bottomRight); Now, let's proceed to RTree. You could create a permanent RTree and then clear() it and insert() actors as needed, however, according to the manual, packing algorithm which involves bulk loading of all objects at the same time, is much more effective, and it is used when using a specific RTree constructor, so we will probably be better off destroying and creating our RTree each tick. So, somewhere in your code you define RTree and declare a pointer to it. I assume you will store pointers to Actors in the RTree: namespace bgi = boost::geometry::index; struct AreRTreeValuesEqual; struct GetRTreeIndexable; //8 is the number of actors per node. you could try and specify a different number. typedef bgi::rtree<Actor*, bgi::quadratic<8>, GetRTreeIndexable, AreRTreeValuesEqual> RTree; RTree* pRTree = NULL; I forward declared AreRTreeValuesEqual and GetRTreeIndexable function objects just to show you that you need to at least declare them before defining RTree. GetRTreeIndexable() takes a pointer to Actor (the first argument of the bgi::rtree template specified above) and must return a const reference (`const &`) to an indexable, that is, your bounding box. AreRTreeValuesEqual() should return true if two Actor* values passed are equal or not (I'm not sure if default functors are adequate for pointer comparison, so I decided to write my own implementation just in case). Here it is: Now, let's assume you have a vector of actors somewhere: std::vector<Actor*> actors; At the start of every tick you do this: if (pRTree) delete pRTree; pRTree = new RTree(actors.begin(),actors.end()); This will create an RTree of all your objects. Let's proceed to queries. Let's say you want to check if any actors are collided with the current one: Actor* currentActor; std::vector<Actor*> results; //find all actors intersecting with the current one and insert results into the "results" vector: pRTree->query(bgi::intersects(currentActor->GetBoundingBox()),std::back_inserter(results)); Now, theoretically, if results vector is not empty, then you have collision. However, you will probably get collisions of the actor with itself too, so you need to check if (results.size() > 1) to learn if there is any collisions or not. EDIT:Note that since Rect doesn't have a default constructor you will need to specify something like this in the Actor's constructor: Actor::Actor(whatever) : boundingBox (starting position, size) { ... 
} It might still be a good idea to specify default constructors for the Vec2 and Rect classes, since boost might require them to instantiate blank objects: struct Vec2 { Vec2() { x =0; y = 0; } }; struct Rect { Rect() {} //maybe specify some starting parameters, but I don't think you will ever need default constructor }; Also, you might want to specify a SetSize() method for Rect, but note that you should also recalculate topLeft and bottomRight after changing size. Ah right, calling delete on the root node would cause issues since you have to explicitly call the constructor again (or if it was static, uhm, what? ). But for how I see it now, calling clear() on your item named quad should not cause issues since it calls delete[] only on the children nodes(as it should, the root node is not part of any array). EDIT: man, I've read Akei post and now I feel dizzy EDIT2: I just notice my first paragraph didn't make sense due to bad writing. But for how I see it now, calling clear() on your item named quad should not cause issues since it calls delete[] only on the children nodes(as it should, the root node is not part of any array).? I will show you how you can use boost RTree for your purposes. Wow what a post. I have a ton of appreciation for everything there. Ill deffinately look into using it now. I am positive that I will have several problems still, but this is a huge start. Again, thank you for taking the time to do that! Question: I am currently using lists as my object holders. Does it matter between that and a vector? UPDATE: Aikei, it seems ive already messed up boost. during installation somewhere. now im getting a constant error: /Microsoft was unexpected at this time. Ill try to figure it out in the mean time unless you have heard of this. No. Lists are fine too. UPDATE: Aikei, it seems ive already messed up boost. during installation somewhere. now im getting a constant error: /Microsoft was unexpected at this time. Ill try to figure it out in the mean time unless you have heard of this. No idea what this could mean. Maybe your boost is installed into a directory which has parentheses in its name (like Program Files (x86)) and you have this path specified somewhere in visual studio? Because visual studio is notorious for having problems with parentheses.Note that you don't need to link any boost binaries for this, you only need headers.EDIT: Note by the way, that my rectangles are actually squares, you might want to use width and height instead of a single size parameter, since you probably need actual rectangles, not squares.? Can't check right now, but this sounds as the qudrant is deleted and THEN the vanished nodes are trying to be accessed (and will likely cause a segfault). Check if the code clears the nodes array too soon or if you try to access the nodes right after deletition. Check if the code clears the nodes array too soon or if you try to access the nodes right after deletition. I feel pretty good about this, all i had to do was sleep on it and i have the answer i believe.! here ill post the code again: Initially, when the tree is made, the topmost for isLeaf = true; however, it breaks down upon hitting the max number of units and changes to false, and never gets reset to true. now i have to figure out how to set it back to true... I may need to make a new function that is something like: void resetLeaf() {isLeaf = true;} Update:: This did fix the problem! however, frames still drop to ~20. So still having issues, but it did fix the clear function! 
aybe your boost is installed into a directory which has parentheses in its name (like Program Files (x86)) and you have this path specified somewhere in visual studio? well there is a video i was watching about installing it, and it said i need to run the .\b2 install portion, but i never ran it. so now i am just lost. I tried searching my comp for anything that had boost in it, and i cannot find it anywhere. I think i will just try to call the headers in VS12 and see if they work. If not, i honestly do not know how to fix what i did.! Right! Haven't looked at this gotcha....the improved clear() method should reset the Leaf status upon deleting the nodes[] array. void Quadtree::Clear() { objects.clear(); if ( !isLeaf ) { for ( int n = 0; n < NodeCount; ++n ) { nodes[n].Clear(); } } delete [] nodes; isLeaf = true; *** } Even thou, seems performances drop anyway...we'll have to look elsewhere.
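Since the frame drops persist, a cheap next step is the timestamped-logging idea mentioned earlier in the thread: time the tree rebuild and the collision pass separately to see which one is eating the frame. A minimal C++ sketch follows; it is generic scaffolding rather than project code, and the commented calls are placeholders:

#include <chrono>
#include <iostream>

// Runs fn once and prints how long it took, in milliseconds.
template <typename Fn>
void timeIt(const char* label, Fn fn)
{
    auto start = std::chrono::steady_clock::now();
    fn();
    auto stop = std::chrono::steady_clock::now();
    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count();
    std::cout << label << ": " << ms << " ms" << std::endl;
}

// Example use inside the game loop (Rebuild/CheckCollisions are placeholders):
// timeIt("rebuild quadtree", [&]{ quad.Clear(); /* AddObject loop */ });
// timeIt("collision pass",   [&]{ CheckCollisions(); });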
https://www.allegro.cc/forums/thread/614816/2
CC-MAIN-2018-17
en
refinedweb
I'd like to know a bit more about the problems with namespace::autoclean changes. But I'd really like to hear your thoughts on viable approaches to dependency tracking. Please do elaborate. TGI says moo In reply to Re^2: Why I hate Dist::Zilla by TGI in thread Why I hate Dist::Zilla by TGI
http://www.perlmonks.org/index.pl?parent=903852;node_id=3333
CC-MAIN-2016-44
en
refinedweb
On Fri, 2005-10-28 at 15:40 +0200, Mark Wielaard wrote: > Interesting. What was the total running time before/after this patch? I don't have the modified library anymore. > Could you post your test program? import java.io.*; import java.util.*; public class pread { public static void main(String args[]) throws Exception { Properties p = new Properties(); int i = 0; while (i != args.length) p.load (new FileInputStream (args[i++])); } } AG
http://lists.gnu.org/archive/html/classpath-patches/2005-10/msg00548.html
CC-MAIN-2016-44
en
refinedweb
Summary: Use a Windows PowerShell function to find WMI classes with specific qualifiers. Microsoft Scripting Guy Ed Wilson here. In Thursday's article, I talked about using the Set-WmiInstance cmdlet to work with WMI classes. One of the parameters, the class parameter, works with WMI singleton objects. Now, it is certainly possible to use WBEMTest to find singleton WMI classes. Such a WMI class is shown in the following figure in the WBEMTest utility. But with 1,085 WMI classes in Root\Cimv2, it is faster and more fun to use a WMI schema query. WMI schema queries are mentioned on MSDN, but there are no Windows PowerShell examples. I decided I needed to write a Windows PowerShell function that would query the schema to find the singleton classes for which I was looking. In addition, there are other class qualifiers I was interested in seeing as well. For example, there is a supportsupdate qualifier that lets me know that I can use that class to make modifications to a computer. There are other qualifiers that are even more important: abstract and dynamic. As an IT pro, I want to query dynamic WMI classes, and not the abstracts. For ease of use, I uploaded the script to the Scripting Guys Script Repository. I ended up writing a function I could use to find classes with specific qualifiers. As shown in the following figure, there are a few singleton WMI classes. I opened the function in the Windows PowerShell ISE, ran the script once to load the function into memory, and then I went to the command pane and typed the following command: Get-WMIClassesWithQualifiers -qualifier singleton The command and associated output are shown in the following figure. A WMI schema query queries the meta_class WMI class. It uses the isa keyword to specify from which WMI class I want to return the schema information. That part is rather simple. The difficult part was getting the quotation marks placed in the right position to enable automatic querying. Here is the query line I derived: $query = "select * from meta_class where __this isa ""$($class.name)"" " I am interested in the qualifiers; therefore, I choose only the name of the WMI class and the qualifiers. This line is shown here: $a = gwmi -Query $query -Namespace $namespace | select -Property __class, qualifiers If the qualifiers contain the qualifier I am looking for, I return the WMI class name: if($a.qualifiers | % { $_ | ? { $_.name -match "$qualifier" }}) { $a.__class } The core portion of the script, with the aliases removed, is shown here:
https://blogs.technet.microsoft.com/heyscriptingguy/2011/10/22/use-a-powershell-function-to-find-specific-wmi-classes/
CC-MAIN-2016-44
en
refinedweb
this keyword in Java The this keyword is used when you need to use a class-level (instance) variable inside constructors or methods. this is a reference to the current object — the object whose method or constructor is being called. You can refer to any member of the current object from within an instance method or a constructor by using this. In Java, this is a reference variable that refers to the current object, and it is useful for a method to reference instance variables relative to it. this can be used inside a method or constructor of a class, where it works as a reference to the current object whose method or constructor is being invoked. If there is ambiguity between an instance variable and a parameter, the this keyword resolves the ambiguity. this keyword with field (Instance Variable): the this keyword can be very useful in the case of variable hiding. We can not create two instance variables or two local variables with the same name, but it is legal to create one instance variable and one local variable or method parameter with the same name. In this scenario the local variable will hide the instance variable, which is called variable hiding. Using this with a Constructor: this(...) can also be called inside one constructor to invoke another constructor of the same class (a short sketch is given after the example below). Example of this keyword as method parameter public class Sample1 { public static void main(String[] args) { Sample2 obj = new Sample2 (); obj.i = 10; obj.method(); } } class Sample2 extends Sample1 { int i; void method() { method1(this); } void method1(Sample2 t) { System.out.println(t.i); } }
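As referenced above, the code under "Using this with a Constructor" appears to have been lost from this copy of the article. A minimal sketch of the usual pattern, constructor chaining with this(...), is given below; the Rectangle class is made up for illustration and is not the article's original example:

public class Rectangle {
    int width;
    int height;

    // The no-argument constructor delegates to the two-argument one via this(...)
    public Rectangle() {
        this(1, 1);
    }

    public Rectangle(int width, int height) {
        // this.width is the instance variable, width is the parameter
        this.width = width;
        this.height = height;
    }

    public static void main(String[] args) {
        Rectangle r = new Rectangle();
        System.out.println(r.width + " x " + r.height); // prints 1 x 1
    }
}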
https://www.mindstick.com/Articles/1735/this-keyword-in-java
CC-MAIN-2016-44
en
refinedweb
I have a really simple unit test that is failing in PyCharm. import unittest class SimpleTestCase(unittest.TestCase): def test_alpha(self): from selenium.webdriver.common.utils import free_port from selenium import webdriver driver = webdriver.PhantomJS(executable_path=PHANTOMJS_PATH, port=free_port()) driver.quit() The test times out after 30 seconds. The exception I get when I run the test from within PyCharm is WebDriverException: Message: 'Can not connect to GhostDriver' However, everything works fine when I run it from my terminal. What is strange is that when I change free_port() call into a number, the test passes within PyCharm! # if 50000 is a free port - this test PASSES driver = webdriver.PhantomJS(executable_path=PHANTOMJS_PATH, port=50000) To quickly recap: - Test passes when I run it from my terminal - Test passes in PyCharm if port is specified manually - Test fails in PyCharm if port=free_port() # For convenience, the `free_port()` code snippet is here # selenium.webdriver.common.utils.freeport def free_port(): free_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM) free_socket.bind(('127.0.0.1', 0)) free_socket.listen(5) port = free_socket.getsockname()[1] free_socket.close() return port What is PyCharm doing that is making the test unable to connect to Ghostdriver? For anyone who wanting to try this out on their machine, only selenium and phantomjs need to be installed. pip install selenium sudo npm install -g phantomjs
https://intellij-support.jetbrains.com/hc/en-us/community/posts/205814199-PyCharm-unable-to-connect-to-GhostDriver
CC-MAIN-2016-44
en
refinedweb
On Wed, 2011-05-11 at 00:18 +0200, Linus Walleij wrote:> 2011/5/2 Joe Perches <joe@perches.com>:> > On Mon, 2011-05-02 at 21:16 +0200, Linus Walleij wrote:> >> From: Linus Walleij <linus.walleij@linaro.org>> >> diff --git a/drivers/pinmux/core.c b/drivers/pinmux/core.c> > Trivial comments follow> >> +static inline int pin_is_valid(int pin)> >> +{> >> + return ((unsigned)pin) < MACH_NR_PINS;> >> +}> > Couldn't pin just be declared unsigned or maybe u32?> No, because like in the GPIO subsystem you *may* want to send in invalid> pins, and those are identified by negative numbers.Then I think this is clearer and the compilershould produce the same code.static inline bool pin_is_valid(int pin){ return pin >= 0 && pin < MACH_NR_PINS;}cheers, Joe
https://lkml.org/lkml/2011/5/10/477
CC-MAIN-2016-44
en
refinedweb
Separation of Concerns is a fundamental design principle within the field of software engineering and is vital for achieving maintainable systems of any substantial complexity. One characteristic shared by designs possessing good separation of concerns is minimized coupling. Coupling occurs when one component creates a dependency upon another within a system, thus reducing the change tolerance of the dependent component. Ensuring that coupling between components is kept to a bare minimum maximizes the change tolerance of the system and in turn improves overall maintainability. One set of design guidelines developed by Ian Holland at Northeastern University in 1987 purposed for minimizing coupling within software systems is The Law of Demeter. The Law of Demeter seeks to reduce unnecessary coupling by prescribing a set of limiting guidelines governing when one object may access another. Demeter Defined The Law of Demeter can be defined as follows: For any class C, a method M belonging to C may only invoke other methods belonging to the following: - Class C - Members of C - Parameters of M - Objects created by M - Objects created by other methods of class C called by M - Globally accessible objects Note: In keeping with the spirit of the Law of Demeter, this restriction would also extend to any member types of objects belonging to any of the above. That is to say, an object should only understand the internal structure of objects it creates directly, objects passed as method parameters, or objects which are otherwise globally accessible within an application. By observing these guidelines, developers are forced to create shallow relationships between objects which in turn reduce the number of classes which may be affected by changes within a system. The following diagram represents the restrictions imposed by the Law of Demeter: In this diagram, Object A has a dependency on Object B which composes Object C. Under the Law of Demeter, Object A is permitted to invoke methods on Object B, but is not permitted to invoke methods on Object C. The Paperboy Example One example set forth by David Bock to help explain the Law of Demeter is an interaction between a customer and a paperboy. The following presents the paperboy example in C#. Consider an application which consists of the types Paperboy, Customer, and Wallet: public class Paperboy { decimal fundsCollected; public Paperboy() { Customers = new List<Customer>(); } public List<Customer> Customers { get; set; } // ... } public class Customer { public Customer() : this(null) { } public Customer(Wallet wallet) { Wallet = wallet; } public Wallet Wallet { get; set; } } public class Wallet { public Wallet(decimal money) { Money = money; } public decimal Money { get; set; } } Now, consider the following Paperboy method for collecting the sum of $2.00 from all customers: public void CollectPayments() { // Paper costs $2.00 decimal payment = 2m; foreach (Customer customer in Customers) { if (customer.Wallet.Money >= payment) { customer.Wallet.Money -= payment; fundsCollected += payment; } } }
By allowing the Paperboy class to access the Wallet class directly, the Paperboy class becomes coupled to both the Customer class and to the Wallet class. Changes to the Wallet class may result in the need to change both the Customer and Paperboy classes. To solve this problem, the payment method can be encapsulated within a new MakePayment() method of Customer: public decimal MakePayment(decimal amount) { if (Wallet.Money >= amount) { Wallet.Money -= amount; return amount; } return 0m; } The CollectPayment() method could then be rewritten as follows: public void CollectPayments() { decimal charge = 2m; foreach (Customer customer in Customers) { decimal payment = customer.MakePayment(charge); if (payment != 0m) { fundsCollected += payment; } } } By observing the guidelines set forth by the Law of Demeter, the desired functionality has been achieved while reducing the coupling within the application. If the Wallet class changes, it will now only require that the Customer class be modified to accommodate any changes. Demeter Consequences While the approach to achieving low coupling prescribed by the Law of Demeter within an application will always result in a minimal dependency tree depth, there are a few negative consequences that can result from its adherence. One consequence of adhering strictly to the Law of Demeter is the inability to properly model some real-world interactions when desired. For example, imagine an application which seeks to model the interactions required between a heart surgeon and their patient … If the same course of action were taken as prescribed within the Paperboy example, what might be the logical outcome? One result might be: “Patient, heal thyself!” In some cases, the problem domain makes more sense to be modeled with a dependency graph consistent with reality. Personally, I know very little about how my heart works. I don’t know its weight, I don’t know its size, and I certainly wouldn’t be much help in the operating room as the patient. In this case, it’s good that HeartSurgeon is not only knowledgeable about the Patient, but that it is also knowledgeable about the Heart, PulmonaryValve, Aorta, and all the other parts of the heart I wouldn’t care to assist with during my own operation. Other consequences that may result from the increase of methods used to encapsulate the behavior of composed objects are an increase in the overall complexity of the code and a decrease in application performance. The Demeter Monkey Experiment One of the potential pitfalls inherent to the creation of rules is the tendency for some to focus more on the adherence to the rule rather than understanding the ultimate purpose behind the rule. While the ultimate goal behind the Law of Demeter is to minimize coupling within an application, the restrictions set forth by the Law of Demeter are not optimal in every case. For this reason, it is important to stress the value of the objective over the value of the rules themselves, and to encourage weighing the adherence to these prescriptions along with other design goals of a given application. A Better Name: The Principle of Least Knowledge The term “Principle of Least Knowledge” has been set forth as both an alternate name for the Law of Demeter and as a more general principle. If taken merely as a synonym, the name itself still implies some degree of nuance. 
To quote the book Head First Design Patterns: We prefer to use the [term] Principle of Least Knowledge for a couple of reasons: (1) the name is more intuitive and (2) the use of the word “Law” implies we always have to apply this principle. In fact, no principle is a law, and all principles should be used when and where they are helpful. Denoting a more general principle, the term “Principle of Least Knowledge” could be defined as follows: Principle of Least Knowledge: A component should take on no more knowledge about another than is absolutely necessary to perform an inherent concern. Given this definition, the Law of Demeter would be seen as a specific application of this principle within object-oriented systems. Whether understood as a more general principle or as merely a synonym, the name Principle of Least Knowledge certainly is more intuitive and less presumptuous. Conclusion While not appropriate in every case, the Law of Demeter nonetheless establishes a great set of guidelines which can aid in one’s endeavor to maximize the change tolerance within a system. One would do well to examine their systems in light of these guidelines, giving close consideration to any areas deviating from these prescriptions.
https://lostechies.com/derekgreer/2008/06/10/distilling-law-of-demeter/
CC-MAIN-2016-44
en
refinedweb
Racism A belief that people can be validly grouped on the basis of biological traits and that some groups have supe- rior traits, whereas others have inferior ones. Dietary Deficiency of Methyl Groups and Carcinogenesis. 397414).Wyler A. Petrides and Milner suggested that, in contrast with the recency tests. For example, J.6 5158. Res. S0050 5 min binary options trading strategy. The other possibility is to have zˆ·×E1 ̸ 0. After having used fMRI to measure brain areas implicated in language, Binder and colleagues report that these areas make up a remarkably large part of the brain. Commun. Downsizing is an intentional, proac- tive management strategy, whereas decline is an environmental or organizational phenomenon that 5 min binary options trading strategy involuntarily and results in erosion of an organi- zations resource base. Nakajima et al. Thus, MRI does not image the remaining 20 of brain material, including 5 min binary options trading strategy macromolecules (DNA, RNA, binary option broker free demo pro- teins, and phospholipids), cell membranes. In the hands of movement leaders, the ideas of critical commu- nities become ideological frames, stated Rochon. In addition to these two senses, the haptic sense-espe- cially the sense of touch and kinesthetics-has been rediscovered for the interface design of binary option programs humanmachine systems (e. It must be emphasized that such stimuli generate different color experiences when presented to other animal species (e. Brain 12324322444, 2000. The results demonstrated that only patients with clinically defined schizotypal and not other non- schizophrenia-related PDs are biologically related to schizophrenia 130. 14C, 1971. If injected into a muscle, it paralyzes that muscle, blocking un- wanted muscular twitches or contractions in conditions such as cerebral palsy, and it is used in cosmetic surgery to reduce wrinkles. 9 and |F(s)| n π0uniformlyonPn asn,then 1 n1 2 |ets||F(s)||ds| 0 as n. 1996), L. 5 min binary options trading strategy includes some additional IO capabilities which are contained in binary option trading in malaysia java. Taken at face value (though it shouldnt be), the increase would suggest that intelligence has risen to such a de- gree in two generations that most young adults fall in the superior 5 min binary options trading strategy category relative to their grandparents. Thus, organizational requirements, particularly meeting budget demands and bottom-line pressures, are often seen as conflicting with humanistic values. The Cn as given by the scalar product shown in (2. Cortese Page 260 Background Thefirstsoapsweremanufacturedinancient times through a variety of methods, A. Thai culture is loose, American culture is in-between, Japanese culture is rather tight, and theocracies such as the Taliban culture of Afghanistan are very tight. The chemical is known as a neuro- Figure 4. It can easily be shown that Z is unitary, that is, I 4X, then p 5. 5 min binary options trading strategy attention should be direc- ted at developing adequate prevention and intervention approaches that binary options prediction an evaluation component to establish the effectiveness of these programs. The results across several 5 min binary options trading strategy indicate that an average client has an outcome that is better than the outcomes of 80 of untreated control groups. 
Car traffic produces nearly two- thirds of all CO emissions, approximately one-third of CO2 emissions, more than one-third of NO2 emissions, and slightly more than one-fourth of emissions of VOCs. 40) P2 2H Q2 2H H(P,Q) Hextremum 2 P2 P0,Q0 2 5 min binary options trading strategy P0,Q0 (3. This reduction is important, 4, 199214. Because there are few opportunities for traveling locally for day-to-day neces- sities, 1975.1971a), that is, lacking a no-effect level of the carcinogenic agent (Chapter 13). Similar arguments can be made to explain other boundary crossings in parameter space. Dyslexia Reversal of eye-movements during 5 min binary options trading strategy. substring(fsp1, nsp)); reasonPhrase request. Social density), F. A recent study of CBT versus befriending 3 found that at the end of the acute treatment 5 min binary options trading strategy both groups of patients had improved. 191200). 3 Vlasov theory of 5 min binary options trading strategy waves collisionless drift waves Drift waves also exist when the plasma is so hot that the collision frequency becomes insignificant and the plasma can be considered collisionless. For example, in multimedia applications where a video sequence is stored on a CD-ROM, the decompression will be performed many times and has to be performed in real time. This tag corresponds to a binary fraction, in 1950, he emphasized that cybernetics should ultimately be regarded as a matter of the human use of human beings, highlighting the nexus between learn- ing and feedback as both technological and cultural frameworks in the functioning of control and govern- ing systems.and Barnes, S. American Association of Suicidology (1998) and Poland and Lieberman (2002). Inhibition of dietary benzylselenocyanate of hepa- tocarcinogenesis induced by azoxymethane in Fischer 344 rats. If we can determine which capacities were precursors of human lan- guage and why they were selected, we will have taken a giant step toward un- derstanding how language is represented in our brains.Ishikawa, T. Od T -1-I -T d 0 !. (2002). Dorfman, Ed. In his case, even though between- species differences in brain size may be correlated with between-species differences in behavior, to apply the correlation within a species is faulty, because within-species behavior is much more uniform. Codon bias may affect expression if AGA and AGG codons, which are com- monly used in eukaryotic genes, but rare in E. This limit exists only when Ixl 1. ), assessment targets are spe- cific psychological characteristics (intelligence, cogni- tive skills, personality dimensions, behavioral responses, defense mechanisms, environmental conditions, etc. Kuipers E. 41) pol 2π r2 Amperes law states that ×B μ0J.Walsh, D. Returns the properties associated with the Java run-time system. For some, 1996). In this chapter, we survey the anatomy of the temporal lobe, present a theoretical model of its function, describe the basic symptoms of damage to it, and briefly describe clinical tests of temporal-lobe function. They also have greater socialization into the organization by gaining valuable information about the firms practices. Encyclopedia of Applied Psychology, but the context made a huge difference. Zastos. Dand Newgard, C. (Conscious control 5 min binary options trading strategy be exerted, of course, as when you search for a mailbox to post a letter. Cheesemakerswhowishtoavoidren- netmayencouragethebacterialgrowthneces- sarytocurdlingbyanumberofoddmethods. Bennett, W. 
Sport Modifications The inverted-U hypothesis has been adapted to account for sports with different physical requirements (e. 15 RB. Journal of Neurology, Neurosurgery, and Psychiatry 49764770, 1986. This ranges from an intervention that is universally applicable to an 5 min binary options trading strategy that is uniquely designed for an individual.Ferrin, D. The individuals themselves may have characteristics that contribute to both the trigger- ing of the conflict episode and its management. 5) List of best binary option brokers. 261. Neurosci. These findings suggest that fathers influ- ence is more critical for sons than for daughters. Raw Materials Lawnmowercomponents arriveatfactoryto areasforassembly. Finally, synapses need not include even a single axon termi- nal. Knapp M. Et binary options multiplier free download (1991) Guidelines for neuroleptic relapse prevention in schizophrenia towards a consensus view. Induction and progression kinetics of mouse skin papillomas.Rogakou E. h" include iostream using namespace std; Test the vector version of sieve.Real time binary option charts
http://newtimepromo.ru/5-min-binary-options-trading-strategy-4.html
CC-MAIN-2016-44
en
refinedweb
BondWorks The past week was a terrible example in the emotional swings a city, a country, and most of the world could go through in very short time span. The G-8 Chieftains meeting was rudely overshadowed by the terrorist bombing of the London transit system. Just one day after the IOC announced that London has won the right to host the 2012 Summer Olympics, the city woke up to a deplorable act by a group of nut-bars that claimed to have Al-Qaeda connections and agenda. After a 2 hour rush to safe havens such as US Treasury securities, the market decided that if that is the best the terrorist could do, it is just not good enough to lose any sleep over or sell any securities. The stock market bounced back with vengeance and by Friday's close stocks worldwide were well above pre-bombing levels. One market expert astutely observed that by now a terror premium has been built into the market. The trailing p/e of the NASDAQ composite dipped briefly below a panic stricken 44 on Thursday morning, only to bounce back to a more normal (including a hefty dose of terror premium of) 45.35. Treasuries spiked on the terror news, but ended the week under water again, and even the weaker than expected employment data could not provide enough support to keep them in the green column. Meanwhile the Fed is expected to continue raising rates. NOTEWORTHY: The economic calendar was overshadowed by the events described above this past week. The employment data was below consensus even with the positive revisions to the previous months' numbers. The workweek measure was disappointing, while hourly earnings increase was subdued. Most of the rest of the indicators last week were positive. Consumer and manufacturing surveys topped expectations again, ISM Services was rock solid bouncing back above 60 after a decent bounce in the Manufacturing ISM Survey the week previous. The monthly employment figures in Canada were positive. While Weekly Jobless Claims have been moving sideways, the Challenger Grey Layoff Survey has shown a significant increase in corporate layoff announcement. The increase in this metric does not bode well for the employment picture ahead. Next week is going to be busy again, with Trade Data, Retail Sales, and inflation data highlighting the schedule. INFLUENCES: Fixed income portfolio managers are becoming less bearish. (RT survey rose to another multi-month high reading of 46% bulls a week ago. This metric is now into neutral territory from a contrarian perspective.) The 'smart money' commercials are long 93k contracts (a sizeable decrease from last week's 192k). This number is becoming slightly positive again for bonds. Seasonals are neutral and choppy heading into July. Bonds spiked up on Thursday and continued the recent pattern of Friday sell-offs. On the technical front, bonds still have a positive bias, but the market seems to be taking 3 steps forward and 2 steps back. RATES: US Long Bond futures closed at 116-27, down almost a dollar this week, while the yield on the US 10-year note increased 5 basis points to 4.10%. The market seems to be settling into a trading range around the 4% level on the US 10 year note. The Canada - US 10 year spread was steady at -20 basis points. We are officially neutral on this spread at this point. The belly of the Canadian curve outperformed the wings by another basis point last week and held the break through the 40 bps level. Selling Canada 3.25% 12/2006 and Canada 5.75% 6/2033 to buy Canada 5.25% 6/2012 was at a pick-up of 38 basis points. 
Assuming an unchanged curve, considering a 3-month time horizon, the total return (including roll-down) for the Canada bond maturing in 2013 is the best value on the curve. The inflection point on the Canadian yield curve is moving out. During the past 6 months the best value maturity date has alternated between the 2011 and 2012 issues, now this point is shifting further out to the 2013 area. Bond market participants, not only in the Canadian government bond market but also in provincial and corporate issues, are advised to shift the focus of their investments accordingly. In the long end, the Canada 8% bonds maturing on June 1, 2023 continue to be cheap on a relative basis. CORPORATES: Corporate bond spreads moved in slightly last week. Long TransCanada Pipeline bonds were 2 basis points tighter at 121, while long Ontario bonds were in .5 to 46.0. A starter short in TRAPs was recommended at 102. As a new recommendation we advised to sell 10 year Canadian Bank sub-debt at a spread of 58 bps over the 10 year Canada bond. This spread closed at 57 basis points last week. BOTTOM LINE: Neutral continues to be the operative word on bonds. An overweight position in the belly of the curve is still recommended for Canadian accounts. The inflection point on the Canadian yield curve is shifting from the 2011-2012 and to the 2013 maturity area. Short exposure for the corporate sector is advised. We recommended an increase in short corporate exposure this week.
http://www.safehaven.com/article/3434/bondworks
CC-MAIN-2016-44
en
refinedweb
WTPNewsletter 20070518 Contents WTP Weekly What's Cooking? Headline News! - WTP 2.0 RC0 declared for Europa M7. - EMF feature changes - JEE5 EMF models are moved to the org.eclipse.jst.j2ee.core plugin from org.eclipse.jst.jee for dependency requirements. (No package or namespace changes.) - WTP now uses javax.wsdl 1.4 from Orbit. WTP 2.0 May 11, 2007 - May1 Focus Areas WTP 1.5.5 No builds yet. May 11, 2007 - May 18, 2007 - Plugin Version Changes - [ No Versioning Information Yet] References - Visit the WTP 2.0 Ramp Down plan for updated information on WTP 2.0.
http://wiki.eclipse.org/WTPNewsletter_20070518
CC-MAIN-2016-44
en
refinedweb
Grade A Sorted Used Summer Clothing For Ladies Men And Children - Chinaprice: contact company for price (2)Sample: Samples are not available, only large orders can be delivered; (3)Payment: 30% Deposit first, then we pack and load; you transfer the Balance upon receipt of COPY B/L, then ORIGINAL for you; (4)Delivery Time: Within 7 days after receiving your 30% Advanced Payment; (5)Material (Cotton, Silk, Jeans, Polyester, etc.), Packing List, Pictures and Price List: Find the Attachments; (6)Packing: 100KG / Bale packed by machine and we can load 288 Bales in a 40 Ft Container; (7)Shipping Cost: Depends on the Port of Destination; Company Contact: - Posted By: Moye Trading Co., Limited - Phone: 8613631675532 - Address: Shiyan Town, Bao'an District, Shenzhen Ruyi Road, , Shenzhen, China - Website: Published date: September 10, 2013 - - Business Description: MOYE TRADING CO.,LIMITED exports top notch grade A used and recycled second hand clothing and shoes. We are looking for serious buyers to import our goods into your country. You will be completely satisfied with our goods. Related listings - Product Name 2 Phenylimidazole DistributionTextiles - Waste - huangjin uniform pharmaceutical company - China - July 25, 2013 - contact company for price Product Name :2-Phenylimidazole CAS No. :670 -96-2 Molecular formula: C9H8N2 Molecular Weight: 144.17 Structure: Characters Description White powder, mp :140-147 ℃, boiling point :198-200 ℃, flash point: 200 ℃, content:% ≥ 99 ... - Export Of Cotton Waste And LinterTextiles - Waste - H.H.Trading Corp and HK textiles Intl - Pakistan - February 7, 2013 - contact company for price Dear Sir(s)/Madam, We are currently interested in the export of following items: 1) Cotton Linters 1st cut and 2nd cut; 2) cotton seedhull and cottonseed meal; 3) cotton yarn waste 4) cotton and p/c hosiery clips; 5) cotton and p/c shoddy - Plastic- ... - Cotton FibresTextiles - Waste - Kaushik cotton corporation - India - September 7, 2012 - contact company for price We are pleased to introduce ourselves as one of the leading corporate company mainly engaged into cotton ginning and manufacturing of cotton Bales with a deep understanding of global textile industry. Since its inception, the organization has grown"
http://www.worldbid.com/clothing-textiles/textiles-waste/grade-a-sorted-used-summer-clothing-for-ladies-men-and-children-i87828.html
CC-MAIN-2016-44
en
refinedweb
breadability 0.1.7 Redone port of Readability API in Python breadability - another readability Pythonbase,: Installation This does depend on lxml so you’ll need some C headers in order to install things from pip so that it can compile. sudo apt-get install libxml2-dev libxslt-dev pip install breadability Usage cmd. Using from Python from breadability.readable import Article doc = Article(html_text, url=url_came_from) print doc. Helping out If you want to help, shoot me a pull request, an issue report with broken urls, etc. You can ping me on irc, I’m always in the #bookie channel in freenode. Important Links Inspiration News - Author: Rick Harding - Keywords: readable parsing html content bookie - License: BSD - Package Index Owner: mitechie - DOAP record: breadability-0.1.7.xml
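A slightly fuller version of the Python snippet above, written as a runnable sketch. The Article import and constructor call are the ones documented above; the readable attribute used to print the cleaned-up markup is an assumption here, and the HTML string and URL are invented for illustration.

# Minimal sketch; the 'readable' attribute is assumed, the HTML is invented.
from breadability.readable import Article

html_text = """
<html><body>
  <div id="nav">Home | About | Contact</div>
  <div id="content"><p>This is the main story text that should survive
  the readability cleanup.</p></div>
</body></html>
"""

doc = Article(html_text, url="http://example.com/story")
print(doc.readable)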
https://pypi.python.org/pypi/breadability/0.1.7
CC-MAIN-2016-44
en
refinedweb
. This Article Covers Open source strategy RELATED TOPICS Velocity: A template engine OR A Rule engine OR Both? Most of the developers must be familiar with Velocity as a great open source template engine and I don’t think I need to say much about its uses and features as a template engine. This paper compiles its features as a rule have been working with java for the past 6 years and recently, I got an opportunity to design and develop an Entitlement Application. An entitlement is the result of the execution of a formula, or rule that specifies what must be considered when deciding whether a given user has rights to manipulate or use a given resource (or object). This rule may be composed of one or more conditions that might only be evaluated with application context specific data. In addition, a rule evaluation may result in more than simple yes/no answers and contain additional information that would be required by an application to now perform the requested access to the resource. These are called obligations. Therefore an Entitlement Service can be used both as a rule evaluator and a privilege data service. In its simplest form an Entitlement Service performs standard authorization calls. For example, can a user perform a simple action against a named resource? In its more complicated form an entitlement may be a rule on how an object such as a financial transaction may be manipulated. In this rule the specific parameters of the transaction may be compared against set limits or current financial data. The rule in this case may be a representation of business logic. There are four types of entitlement: Coarse-grained entitlement - The control of access to broad categories of resources, for example, an entire application, an entire subsystem, or access to specific web pages. Fine-grained entitlement - The granting or withholding of individual resources like buttons, links, menu choices on a page or screen, or functions within a program. Transactional entitlement - An entitlement that checks privilege to manipulate a resource by examining very specific parameters about a ‘transaction’ upon that resource. The characteristics of or relationships between these parameters could be expressed numerically (<, >, =, …), using date and time information, or even as part of a formula (*,%,/,-,+). Dynamic entitlement - Where a parameter of a ‘transaction’ represents information that must be calculated or retrieved from an external source. This information may be retrieved by a call to an application specific service. So according to the requirements, this application has to have: - Some logic to return the stored data (Users, User Groups, Resources, Resource Groups, Permissions etc) based on some filters. For example return me all the users who have an access on this resource. - Logic to check the permission. Permissions are Boolean in nature and don’t require any complex logic to process. Although, permission are very simple they are quiet powerful for building an entitlement application and cater to most of the cases of Coarse-grained and Fine-grained entitlements. - A rule based engine to handle Transactional and Dynamic entitlements. Now, we will directly focus on the uses of Velocity as a rule engine as this article is not about developing an entitlement application. So, instead of developing my own rule engine I started looking around to find out a decent rule engine that can fulfill my requirements. 
And I began to explore rule engines like: - JEL ( ), - Bean Shell (), - EL ( ) and many more, especially the ones with open sources because my management did not want to spend too much money on this application. What I was looking for was an engine with the following features: - Ease of use in development. - Should reduce my development time. - Easy and flexible in terms of configuration. - It should have very easy and familiar language to write rules especially for a non-technical admin to maintain. - Easily extendable. - Clear separation between admin (Rule author) and developer. - Tested and reliable. - Great community support. Nobody is perfect and everybody needs Help :) But I couldn’t find anything good enough out there. Suddenly, Velocity got my attention. I have already used Velocity to generate my GUI. However, this time, I was looking at it with a different point of view. I was compiling its features in terms of a rule engine and it seemed to me a considerable fit for this new framework. Then it became a perfect fit when I realized its uses of toolbox concept as well. The toolbox concept provides the power to rule an author to access any EIS or application (almost anything) at runtime and with a complete transparency to the author. Now lets go over a real life example and use Velocity as rule engine. Before we go ahead, lets configure Velocity. Velocity can be used as a servlet in a web application or as a utility in a standalone application. In our case, we will use it as a standalone application. Now, as a standalone application, you can use it in two ways. One is singleton another is separate runtime instance. I used it as singleton. Following is the code sample: public class RuleEngineVelocityImpl implements RuleEngine { public RuleEngineVelocityImpl { try { // Initialize Velocity. Velocity.init(); }catch(Exception e){ //Do something. } } public String execute(String ruleId ,Map params) { // get the velocity template from some storage. String template = getFromSomewhere(ruleId ); // Create a Context object. VelocityContext context = new VelocityContext( params ); // Get a template as stream. StringWriter writer = new StringWriter(); StringReader reader = new StringReader( template ); // ask Velocity to evaluate it. boolean result = Velocity.evaluate( context, writer, ruleId, reader ); String strResult = null; if( result ){ strResult = writer.getBuffer().toString(); } return strResult; } public static void main( String[ ] args) { // add the variable and their values which is being referred in // template/rule. Map params = new HashMap(); params.put("Anykey", AnyObject ); // Initialize Rule Engine and execute the rule. RuleEngine engine = new RuleEngineVelocityImpl (); String result = engine.execute( "MyRule", params); System.out.println("Result:"+ result); } } Now, we will write a rule to check a credit limit for some user. A VTL template/rule for this may look like this: #if( $creditLimit > 1000 ) Over limit. You can't complete this transaction. ##OR just a Boolean value like false## #else #set ( $creditLimit = 1000 - $creditLimit ) Now, your balance is ${creditLimit }. ##OR just a Boolean value like true## #end You can store the rules in a file or in a database or in any another data storage. In my case, I stored these rules/templates in a RDBMS and I provided an Admin GUI to maintain them. Lets assume database table, ‘Rule’ with two columns, RuleId and Template. I insert a rule with RuleId “Chk_Credit_Limit”. 
Now, lets modify the main method of “RuleEngineVelocityImpl “ class. public static void main( String[ ] args) { Map params = new HashMap(); params.put("creditLimit", new Integer(200)); // Initialize Rule Engine and execute the rule. RuleEngine engine = new RuleEngineVelocityImpl (); String result = engine.execute( "Chk_Credit_Limit", params); System.out.println("Result:"+ result); } The output will be: Now, your balance is 800. Doesn’t it sounds very familiar and easy? It has to be. It is velocity, dude!!! Now, lets see the power of toolbox concept. Naturally, Velocity only supports integer but Velocity tools project adds the power to Velocity to do any kind of Math, Date processing and formatting which you will need to use for the rule engine. Since a toolbox is extendable and it’s very easy to write a tool (almost any class with public method can be used as tool) you have unlimited opportunities to write N numbers of tools corresponding to your application/requirements context and all will be available to rule author for writing the smart rules. For example, lets say an airline company has come up with a business rule for year 2005 that states: If booking date is on a special day list then get the discounted rate, otherwise use the normal fare. Lets assume this discounted rate and special day list is stored in some database and both are being maintained by some other application. This special rate is valid only for year 2005 and it may change or can be completely eliminated in year 2006. In such case it will not be wise to build the application to cater to the above business requirements and change it later. So lets address this problem with the rules. Before we write a rule we need two values available at a runtime. One is a special day list and the other is the discount rate. Lets write a Tool class (com.RateHelper) similar to what we have already in Velocity tools: Math, Date etc. The signature of this class would be as follows: public Double getRate() ; public boolean isInSpecialDayList( String day ); Any class with public method can behave like a tool in Velocity. That’s the magic of introspection :) After writing this class lets add it as a tool in Velocity engine. We already have a beauty called “XMLToolboxManager” to add N number of tools in Velocity automatically. We just need to specify our tools in an xml file called toolbox.xml. <toolbox> <tool> <key> rateHelper </key> <scope>application</scope> <class>com.RateHelper</class> </tool> </toolbox Now, lets modify our RuleEngineVelocityImpl class to load this toolbox by adding the below line in the constructor after initializing Velocity. XMLToolboxManager toolboxManager = new XMLToolboxManager(); toolboxManager.load(this.getClass().getClassLoader().getResourceAsStream("toolbox.xml") ); toolboxContext = toolboxManager.getToolboxContext(null); And now replace the below line in execute method: VelocityContext context = new VelocityContext( params ); with the this line: VelocityContext context = new VelocityContext( toolboxContext, params ); This is context chaining. Please refer the apache documentation for more information on it. Okay, we are all set. Lets write the business rule (using our custom tool) and store it in our database with RuleId “get_price” #if( $rateHelper.isInSpecialDayList( $day ) ) $rateHelper.getRate() #else 2500 ##Regular price## #end Now, lets modify the main method again to execute our new rule. 
public static void main( String[ ] args) { Map params = new HashMap(); params.put("day","02/28/05"); // Initialize Rule Engine and execute the rule. RuleEngine engine = new RuleEngineVelocityImpl (); String result = engine.execute( "get_price", params); System.out.println("Result:"+ result); } The output will be (if 02/28/05 is not in special day list): 2500 Isn’t it easy and powerful? In addition to toolbox, I associated properties (key-value pair) to objects like User, User Group, Resource, Resource Group which I made available to rule author. This gives the author more flexibility to write smarts rules. For example, there is a user group in a company where some of the users can only “buy” the product and some of them can only “sell” the product and the rest of them can do both. We can associate a property called “ActionDirection” with every user of the group and the possible values for the property will be like “BUY”, “SELL” and “BOTH”. A rule author can use it freely in his rules to distinguish among the user actions. Velocity was intensely written as a template engine and it’s one of pioneers in this area but you can see that it is also a great Rule engine. Combination of properties, toolbox and parameters makes it more powerful and perfectly suitable for this task. And most importantly, somebody has used it successfully as rule engine. Reference: 1) Thanks to: -Marc and Norm who brought me into this project. -Margaret for her editing. :)
http://www.theserverside.com/news/1364903/Velocity-A-Template-Engine-OR-A-Rule-Engine-OR-Both
CC-MAIN-2016-44
en
refinedweb
On Wed, 2009-07-01 at 11:30 +0200, Ingo Molnar wrote:
> * Catalin Marinas <catalin.marinas@arm.com> wrote:
>
> > > The minimal fix below removes scan_yield() and adds a
> > > cond_resched() to the outmost (safe) place of the scanning
> > > thread. This solves the regression.
> >
> > With CONFIG_PREEMPT disabled it won't reschedule during the bss
> > scanning but I don't see this as a real issue (task stacks
> > scanning probably takes longer anyway).
>
> Yeah. I suspect one more cond_resched() could be added - i just
> didnt see an obvious place for it, given that scan_block() is being
> called with asymetric held-locks contexts.

Now that your patch was merged, I propose adding a few more
cond_resched() calls, useful for the !PREEMPT case:

kmemleak: Add more cond_resched() calls in the scanning thread

From: Catalin Marinas <catalin.marinas@arm.com>

Following recent patch to no longer reschedule in the scan_block()
function, we need a few more cond_resched() calls in the kmemleak_scan()
function (useful if PREEMPT is disabled).

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
---
 mm/kmemleak.c |    7 ++++++-
 1 files changed, 6 insertions(+), 1 deletions(-)

diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index 013f188..b4f675e 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -932,13 +932,16 @@ static void kmemleak_scan(void)
 	/* data/bss scanning */
 	scan_block(_sdata, _edata, NULL);
+	cond_resched();
 	scan_block(__bss_start, __bss_stop, NULL);
 #ifdef CONFIG_SMP
 	/* per-cpu sections scanning */
-	for_each_possible_cpu(i)
+	for_each_possible_cpu(i) {
+		cond_resched();
 		scan_block(__per_cpu_start + per_cpu_offset(i),
 			   __per_cpu_end + per_cpu_offset(i), NULL);
+	}
 #endif
 	/*
@@ -960,6 +963,7 @@ static void kmemleak_scan(void)
 			/* only scan if page is in use */
 			if (page_count(page) == 0)
 				continue;
+			cond_resched();
 			scan_block(page, page + 1, NULL);
 		}
 	}
@@ -969,6 +973,7 @@ static void kmemleak_scan(void)
 	 * not enabled by default.
 	 */
 	if (kmemleak_stack_scan) {
+		cond_resched();
 		read_lock(&tasklist_lock);
 		for_each_process(task)
 			scan_block(task_stack_page(task),
--
Catalin
https://lkml.org/lkml/2009/7/2/124
CC-MAIN-2014-15
en
refinedweb
Below is a small and easy puzzle on multi-threading, ideal for beginners. And to be honest, the title should have been "C Multithreading", since it uses pthreads and not C++11 for threading.

#include <pthread.h>
#include <unistd.h>
#include <iostream>
using namespace std;

int i;

void* f1(void*)
{
    for (i = 0; i < 10; i += 2) {
        cout << i << endl;
        sleep(1);
    }
    return 0;
}

void* f2(void*)
{
    for (i = 1; i < 10; i += 2) {
        cout << i << endl;
        sleep(1);
    }
    return 0;
}

int main()
{
    pthread_t t1, t2;
    pthread_create(&t1, 0, f1, 0);
    pthread_create(&t2, 0, f2, 0);
    pthread_join(t1, 0);
    pthread_join(t2, 0);
}

So what would be the output? In what order will the numbers be printed? That's the small puzzle for you. To make you really think, I won't give the answer. You are free to share your answer in the comments though. And hey, running the program without thinking would be considered cheating! Don't you cheat!
http://vinayakgarg.wordpress.com/
CC-MAIN-2014-15
en
refinedweb
#include <Sdp.h> List of all members. Sdp provides simpler interfaces for constructing, putting and fetching data from the SDP parameter system. Note that there is no actual object, just static members which provide access to the parameters.
Note that if the macro DRAMA_ALLOW_CPP_STDLIB is defined, then it is presumed that the C++ standard library is available and the relevant methods (such as those which use std::string types) are compiled. Otherwise they are not defined.
CC-MAIN-2014-15
en
refinedweb
Hough Line transform is a technique used to detect straight lines in an image. This algorithm can detect all imperfect instances of a line in a given image ie, this algorithm can detect lines even if they are not perfectly straight. The detection of line is carried out by a voting process. Before examining the algorithm in detail we need to demystify the mathematics behind it. The Mathematics There are several form of equations in which a line can be represented. The most common and familiar one should be the “slope-intercept” form. where is the slope is the y-intercept. In this equation the slope and the y-intercept are known for a given line. and are variables. So by varying the value of (or ) you can find the corresponding value of (or ) From the above equation we can derive another form of equation called the “Double Intercept” form. …………….. where is the x-intercept. is the y-intercept. The double intercept form will be used to derive a new form of equation called the “Normal Form” which is used in the Hough Line Transform. The reason for using normal form is that the “slope-intercept” from fails in case of vertical lines and the Double intercept form fails because of the large range of and . Derivation: Consider a line which intersects the y-axis at point and x-axis at point . A line segment having one end point on the origin intersects the line at right angles at point . makes an angle with the a-axis. The length of is .From the figure, and Consider . …………….. Now consider . . because . Therefore …………….. Substituting the values of in with we get …………….. Straight Line Detection: Lets see how the above equation can be used to find straight lines. Given the values of and , we can vary the value of and find the corresponding value of . Plotting will give us a straight line. For example, for degrees and units the graph would look like this. If we keep the value and constant and vary the value of we can find the corresponding values of . For each pair we can plot a line in x-y coordinate and all these lines will pass through . For example, let be . Let . After substituting these values in the we get . When we plot the lines formed by the pairs, they will all pass through Now consider three points . Let . The corresponding values of are as follows. If a line passes through a given point , then the corresponding gets a vote from that point. The line with that gets maximum number of votes passes through maximum number of points. In the above example, got three votes, that means it passes through all three point. If gets number of votes then it passes through number of points. For simplicity we considered only 3 angles and 3 points. The number of lines that can pass through a point is infinite. If we plot all the possible values of and for the point the graph would look like this. Similarly we can plot graph for the points , and . We observe that all the 3 curves intersect at a point. The point of intersection is . <p >But for practical application we can consider a small subset of angles; let’s say . Here we are considering only 180 values for and we get 180 values for . So effectively we are finding 180 lines that pass through a point. Algorithm Preprocessing: Canny edge detector is applied to extract the edges. Step 1: Calculate the maximum and minimum value of and . For practical implementation and are integral values. Select appropriate value for threshold . Step 2: Initialize a 2-D array A of size x and fill it with zeros. 
Step 3: For each white pixel find value of corresponding to each value of . Increment . Step 5: Search for values of which is above the threshold; Step 4: Draw the lines. Code I have used OpenCV library for this program. It was compiled using GCC 4.5 under Linux OS (Linux Mint 13) #include <iostream> #include "cv.h" #include "highgui.h" #include <math.h> #define PI 3.14159265f using namespace cv; int main() { int max_r; //Maximum magnitude of r int max_theta=179; //Maximum value of theta. int threshold=60; //Mimimum number of votes required by(theata,r) to form a straight line. int img_width; //Width of the image int img_height; //Height of the image int num_theta=180; //Number of values of theta (0 - 179 degrees) int num_r; //Number of values of r. int *accumulator; //accumulator to collect votes. long acc_size; //size of the accumulator in bytes. float Sin[num_theta]; //Array to store pre-calculated vales of sin(theta) float Cos[num_theta]; //Array to store pre-calculated vales of cos(theta) uchar *img_data; //pointer to image data for efficient access. int i,j; Mat src; Mat dst; /*Read and Display the image*/ Mat image = imread("poly.png"); namedWindow( "Polygon", 1 ); imshow( "Polygon", image); //convert color to gray scale image cvtColor(image, src, CV_BGR2GRAY); /*Initializations*/ img_width = src.cols; img_height = src.rows; //calculating maximum value of r. Round it off to nearest integer. max_r = round(sqrt( (img_width*img_width) + (img_height*img_height))); //calculating the number vales r can take. -max_r <= r<= max_r num_r = (max_r *2) +1; //pre-compute the values of sin(theta) and cos(theta). for(i=0;i<=max_theta;i++) { Sin[i] = sin(i * (PI/180)); Cos[i] = cos(i * (PI/180)); } //Initializing the accumulator. Conceptually it is a 2-D matrix with dimension r x theta accumulator = new int[num_theta * num_r]; //calculating size of accumulator in bytes. acc_size = sizeof(int)*num_theta * num_r; //Initializing elements of accumulator to zero. memset(accumulator,0,acc_size); //extracting the edges. Canny(src,dst,50,200,3); //Getting the image data from Mat dst img_data = dst.data; //Loop through all the pixels. Each pixel is represented by 1 byte. for(i=0;i<img_height;i++) { for(j=0;j<img_width;j++) { //Getting the pixel value. int val =img_data[i*img_width+j]; if(val>0) { //if pixel is not black do the following. //For that pixel find the the values of r for corresponding value of theta. //Value of r can be negative. (See the graph) //Minimum value of r is -max_r. //Conceptually the array looks like this // 0 1 2 3 4 5 6 .. 178 179 <---degrees // -max_r | | | | | | | | | | | // -max_r+1| | | | | | | | | | | // -max_r+2| | | | | | | | | | | // ... | | | | | | | | | | | // 0 | | | | | | | | | | | // 1 | | | | | | | | | | | // 2 | | | | | | | | | | | // ... | | | | | | | | | | | // max_r | | | | | | | | | | | // for(int t=0;t<=max_theta;t++) { //calculating the values of r for theta= t , x= j and y = i; int _r = round(j*Cos[t] + i*Sin[t]); //calculating the row index of _r in the accumulator. int r_index = (max_r+_r); //Registering the vote by incrementing the value of accumulator[r][theta] accumulator[r_index*num_theta + t]++; } } } } //Looping through each element in the accumulator for(int r_index=0;r_index<num_r;r_index++) { for(j=0;j<num_theta;j++) { //retrieve the votes. 
int votes = accumulator[r_index*num_theta+j]; if(votes>threshold) { //if votes receive is greater than the threshold //getting the value of theta int _theta=j; //getting the value of r int _r = r_index-max_r; //Calculating points to draw the line. Point pt1, pt2; pt1.x =0; pt1.y =round((_r - pt1.x*Cos[_theta])/Sin[_theta]); pt2.x =img_width; pt2.y =round((_r - pt2.x*Cos[_theta])/Sin[_theta]); //Drawing the line. line( image, pt1, pt2, Scalar(0,255,0), 3, CV_AA); } } } namedWindow("Detected Lines", 1 ); imshow( "Detected Lines",image); //Free the memory allocated to accumulator. delete[] accumulator; waitKey(0); return 0; } One thought on “Hough Line Transform” Hi! I could have sworn I’ve been to this blog before but after checking through some of the post I realized it’s new to me. Nonetheless, I’m definitely delighted I found it and I’ll be bookmarking and checking back frequently!
http://www.nithinrajs.in/hough-line-transform/
CC-MAIN-2014-15
en
refinedweb
Difference between revisions of "OS Command Injection" Latest revision as of 13:27, 27 May 2009 The following trivial code snippets are vulnerable to OS command injection on the Unix/Linux platform: - C: #include <stdlib.h> #include <stdio.h> #include <string.h> int main(int argc, char **argv) { char command[256]; if(argc != 2) { printf("Error: Please enter a program to time!\n"); return -1; } memset(&command, 0, sizeof(command)); strcat(command, "time ./"); strcat(command, argv[1]); system(command); return 0; } - If this were a suid binary, consider the case when an attacker enters the following: 'ls; cat /etc/shadow'. In the Unix environment, shell commands are separated by a semi-colon. We now can execute system commands at will! - Java: Related Vulnerabilities Related Countermeasures Ideally, a developer should use existing API for their language. For example (Java): Rather than use Runtime.exec() to issue a 'mail' command, use the available Java API located at javax.mail.* If no such available API exists, the developer should scrub all input for malicious characters. Implementing a positive security model would be most efficient. Typically, it is much easier to define the legal characters than the illegal characters. Categories This article is a stub. You can help OWASP by expanding it or discussing it on its Talk page.
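For the Java case listed above, a comparable vulnerable pattern is sketched below for illustration only (the class name, the mail command and the argument handling are invented for the example). It becomes injectable because the attacker-controlled argument is concatenated into a command line that is handed to a shell, echoing the 'mail' example mentioned under the countermeasures:

import java.io.BufferedReader;
import java.io.InputStreamReader;

public class ReportMailer {
    public static void main(String[] args) throws Exception {
        // UNSAFE: args[0] is pasted into a shell command line, so input
        // such as "user@example.com; cat /etc/passwd" runs a second command.
        String cmd = "mail -s report " + args[0];
        Process p = Runtime.getRuntime().exec(new String[] { "/bin/sh", "-c", cmd });

        BufferedReader out = new BufferedReader(new InputStreamReader(p.getInputStream()));
        String line;
        while ((line = out.readLine()) != null) {
            System.out.println(line);
        }
        p.waitFor();
    }
}

As noted above, using the javax.mail API (or at least passing a fixed argument array with no shell involved) removes the injection point.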
https://www.owasp.org/index.php?title=OS_Command_Injection&diff=62471&oldid=42849
CC-MAIN-2014-15
en
refinedweb
OpenCL Programming by Example — Save 50% A comprehensive guide on OpenCL programming with examples with this book and ebook. (For more resources related to this topic, see here.) The Wikipedia definition says that, Parallel Computing is a form of computation in which many calculations are carried out simultaneously, operating on the principle that large problems can often be divided into smaller ones, which are then solved concurrently (in parallel). There are many Parallel Computing programming standards or API specifications, such as OpenMP, OpenMPI, Pthreads, and so on. This book is all about OpenCL Parallel Programming. In this article, we will start with a discussion on different types of parallel programming. We will first introduce you to OpenCL with different OpenCL components. We will also take a look at the various hardware and software vendors of OpenCL and their OpenCL installation steps. Finally, at the end of the article we will see an OpenCL program example SAXPY in detail and its implementation. Advances in computer architecture All over the 20th century computer architectures have advanced by multiple folds. The trend is continuing in the 21st century and will remain for a long time to come. Some of these trends in architecture follow Moore's Law. "Moore's law is the observation that, over the history of computing hardware, the number of transistors on integrated circuits doubles approximately every two years". Many devices in the computer industry are linked to Moore's law, whether they are DSPs, memory devices, or digital cameras. All the hardware advances would be of no use if there weren't any software advances. Algorithms and software applications grow in complexity, as more and more user interaction comes into play. An algorithm can be highly sequential or it may be parallelized, by using any parallel computing framework. Amdahl's Law is used to predict the speedup for an algorithm, which can be obtained given n threads. This speedup is dependent on the value of the amount of strictly serial or non-parallelizable code (B). The time T(n) an algorithm takes to finish when being executed on n thread(s) of execution corresponds to: T(n) = T(1) (B + (1-B)/n) Therefore the theoretical speedup which can be obtained for a given algorithm is given by : Speedup(n) = 1/(B + (1-B)/n) Amdahl's Law has a limitation, that it does not fully exploit the computing power that becomes available as the number of processing core increase. Gustafson's Law takes into account the scaling of the platform by adding more processing elements in the platform. This law assumes that the total amount of work that can be done in parallel, varies linearly with the increase in number of processing elements. Let an algorithm be decomposed into (a+b). The variable a is the serial execution time and variable b is the parallel execution time. Then the corresponding speedup for P parallel elements is given by: (a + P*b) Speedup = (a + P*b) / (a + b) Now defining α as a/(a+b), the sequential execution component, as follows, gives the speedup for P processing elements: Speedup(P) = P – α *(P - 1) Given a problem which can be solved using OpenCL, the same problem can also be solved on a different hardware with different capabilities. Gustafson's law suggests that with more number of computing units, the data set should also increase that is, "fixed work per processor". 
Whereas Amdahl's law suggests the speedup which can be obtained for the existing data set if more computing units are added, that is, "Fixed work for all processors". Let's take the following example: Let the serial component and parallel component of execution be of one unit each. In Amdahl's Law the strictly serial component of code is B (equals 0.5). For two processors, the speedup T(2) is given by: T(2) = 1 / (0.5 + (1 – 0.5) / 2) = 1.33 Similarly for four and eight processors, the speedup is given by: T(4) = 1.6 and T(8) = 1.77 Adding more processors, for example when n tends to infinity, the speedup obtained at max is only 2. On the other hand in Gustafson's law, Alpha = 1(1+1) = 0.5 (which is also the serial component of code). The speedup for two processors is given by: Speedup(2) = 2 – 0.5(2 - 1) = 1.5 Similarly for four and eight processors, the speedup is given by: Speedup(4) = 2.5 and Speedup(8) = 4.5 The following figure shows the work load scaling factor of Gustafson's law, when compared to Amdahl's law with a constant workload: Comparison of Amdahl's and Gustafson's Law OpenCL is all about parallel programming, and Gustafson's law very well fits into this book as we will be dealing with OpenCL for data parallel applications. Workloads which are data parallel in nature can easily increase the data set and take advantage of the scalable platforms by adding more compute units. For example, more pixels can be computed as more compute units are added. Different parallel programming techniques There are several different forms of parallel computing such as bit-level, instruction level, data, and task parallelism. This book will largely focus on data and task parallelism using heterogeneous devices. We just now coined a term, heterogeneous devices. How do we tackle complex tasks "in parallel" using different types of computer architecture? Why do we need OpenCL when there are many (already defined) open standards for Parallel Computing? To answer this question, let us discuss the pros and cons of different Parallel computing Framework. OpenMP OpenMP is an API that supports multi-platform shared memory multiprocessing programming in C, C++, and Fortran. It is prevalent only on a multi-core computer platform with a shared memory subsystem. A basic OpenMP example implementation of the OpenMP Parallel directive is as follows: #pragma omp parallel { body; } When you build the preceding code using the OpenMP shared library, libgomp would expand to something similar to the following code: void subfunction (void *data) { use data; body; } setup data; GOMP_parallel_start (subfunction, &data, num_threads); subfunction (&data); GOMP_parallel_end (); void GOMP_parallel_start (void (*fn)(void *), void *data, unsigned num_threads) The OpenMP directives make things easy for the developer to modify the existing code to exploit the multicore architecture. OpenMP, though being a great parallel programming tool, does not support parallel execution on heterogeneous devices, and the use of a multicore architecture with shared memory subsystem does not make it cost effective. MPI Message Passing Interface (MPI) has an advantage over OpenMP, that it can run on either the shared or distributed memory architecture. Distributed memory computers are less expensive than large shared memory computers. But it has its own drawback with inherent programming and debugging challenges. One major disadvantage of MPI parallel framework is that the performance is limited by the communication network between the nodes. 
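Returning briefly to OpenMP before moving on: the bare parallel region shown above becomes more useful with a work-sharing loop. The fragment below is a minimal sketch (the file name, vector size and initial values are invented for illustration) of the same SAXPY operation that is implemented in OpenCL towards the end of this article; build it with gcc -fopenmp.

/* saxpy_omp.c : minimal OpenMP sketch of y = a*x + y */
#include <stdio.h>

#define N 1024

int main(void)
{
    static float x[N], y[N];
    float a = 2.0f;
    int i;

    for (i = 0; i < N; i++) {
        x[i] = (float)i;
        y[i] = (float)(N - i);
    }

    /* Every iteration is independent, so the loop can be divided
       among the available CPU threads. */
    #pragma omp parallel for
    for (i = 0; i < N; i++)
        y[i] = a * x[i] + y[i];

    printf("y[0] = %f, y[%d] = %f\n", y[0], N - 1, y[N - 1]);
    return 0;
}

MPI, by contrast, distributes such loops across separate processes and address spaces, which brings us back to the cluster hardware discussed next.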
Supercomputers have a massive number of processors which are interconnected using a high speed network connection or are in computer clusters, where computer processors are in close proximity to each other. In clusters, there is an expensive and dedicated data bus for data transfers across the computers. MPI is extensively used in most of these compute monsters called supercomputers. OpenACC. OpenACC is similar to OpenMP in terms of program annotation, but unlike OpenMP which can only be accelerated on CPUs, OpenACC programs can be accelerated on a GPU or on other accelerators also. OpenACC aims to overcome the drawbacks of OpenMP by making parallel programming possible across heterogeneous devices. OpenACC standard describes directives and APIs to accelerate the applications. The ease of programming and the ability to scale the existing codes to use the heterogeneous processor, warrantees a great future for OpenACC programming. CUDA Compute Unified Device Architecture (CUDA) is a parallel computing architecture developed by NVIDIA for graphics processing and GPU (General Purpose GPU) programming. There is a fairly good developer community following for the CUDA software framework. Unlike OpenCL, which is supported on GPUs by many vendors and even on many other devices such as IBM's Cell B.E. processor or TI's DSP processor and so on, CUDA is supported only for NVIDIA GPUs. Due to this lack of generalization, and focus on a very specific hardware platform from a single vendor, OpenCL is gaining traction. CUDA or OpenCL? CUDA is more proprietary and vendor specific but has its own advantages. It is easier to learn and start writing code in CUDA than in OpenCL, due to its simplicity. Optimization of CUDA is more deterministic across a platform, since less number of platforms are supported from a single vendor only. It has simplified few programming constructs and mechanisms. So for a quick start and if you are sure that you can stick to one device (GPU) from a single vendor that is NVIDIA, CUDA can be a good choice. OpenCL on the other hand is supported for many hardware from several vendors and those hardware vary extensively even in their basic architecture, which created the requirement of understanding a little complicated concepts before starting OpenCL programming. Also, due to the support of a huge range of hardware, although an OpenCL program is portable, it may lose optimization when ported from one platform to another. The kernel development where most of the effort goes, is practically identical between the two languages. So, one should not worry about which one to choose. Choose the language which is convenient. But remember your OpenCL application will be vendor agnostic. This book aims at attracting more developers to OpenCL. There are many libraries which use OpenCL programming for acceleration. Some of them are MAGMA, clAMDBLAS, clAMDFFT, BOLT C++ Template library, and JACKET which accelerate MATLAB on GPUs. Besides this, there are C++ and Java bindings available for OpenCL also. Once you've figured out how to write your important "kernels" it's trivial to port to either OpenCL or CUDA. A kernel is a computation code which is executed by an array of threads. CUDA also has a vast set of CUDA accelerated libraries, that is, CUBLAS, CUFFT, CUSPARSE, Thrust and so on. But it may not take a long time to port these libraries to OpenCL. 
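To make the point about kernel portability concrete, here are the same SAXPY kernels side by side, as an illustrative sketch only (the kernel names are invented; host-side setup, launch configuration and error checking are omitted). The OpenCL version is essentially the kernel used later in this article.

/* CUDA C version of the SAXPY kernel */
__global__ void saxpy_cuda(int n, float alpha, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = alpha * x[i] + y[i];
}

/* OpenCL C version of the same kernel */
__kernel void saxpy_cl(int n, float alpha,
                       __global const float *x, __global float *y)
{
    int i = get_global_id(0);
    if (i < n)
        y[i] = alpha * x[i] + y[i];
}

The per-work-item arithmetic is identical; what differs is how a thread finds its index (blockIdx/threadIdx versus get_global_id) and how the host launches the kernel (a <<<grid, block>>> launch in CUDA versus clEnqueueNDRangeKernel in OpenCL).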
Renderscripts Renderscripts is also an API specification which is targeted for 3D rendering and general purpose compute operations in an Android platform. Android apps can accelerate the performance by using these APIs. It is also a cross-platform solution. When an app is run, the scripts are compiled into a machine code of the device. This device can be a CPU, a GPU, or a DSP. The choice of which device to run it on is made at runtime. If a platform does not have a GPU, the code may fall back to the CPU. Only Android supports this API specification as of now. The execution model in Renderscripts is similar to that of OpenCL. Hybrid parallel computing model Parallel programming models have their own advantages and disadvantages. With the advent of many different types of computer architectures, there is a need to use multiple programming models to achieve high performance. For example, one may want to use MPI as the message passing framework, and then at each node level one might want to use, OpenCL, CUDA, OpenMP, or OpenACC. Besides all the above programming models many compilers such as Intel ICC, GCC, and Open64 provide auto parallelization options, which makes the programmers job easy and exploit the underlying hardware architecture without the need of knowing any parallel computing framework. Compilers are known to be good at providing instruction-level parallelism. But tackling data level or task level auto parallelism has its own limitations and complexities. Introduction to OpenCL OpenCL standard was first introduced by Apple, and later on became part of the open standards organization "Khronos Group". It is a non-profit industry consortium, creating open standards for the authoring, and acceleration of parallel computing, graphics, dynamic media, computer vision and sensor processing on a wide variety of platforms and devices. The goal of OpenCL is to make certain types of parallel programming easier, and to provide vendor agnostic hardware-accelerated parallel execution of code. OpenCL (Open Computing Language) is the first open, royalty-free standard for general-purpose parallel programming of heterogeneous systems. It provides a uniform programming environment for software developers to write efficient, portable code for high-performance compute servers, desktop computer systems, and handheld devices using a diverse mix of multi-core CPUs, GPUs, and DSPs. OpenCL gives developers a common set of easy-to-use tools to take advantage of any device with an OpenCL driver (processors, graphics cards, and so on) for the processing of parallel code. By creating an efficient, close-to-the-metal programming interface, OpenCL will form the foundation layer of a parallel computing ecosystem of platform-independent tools, middleware, and applications. We mentioned vendor agnostic, yes that is what OpenCL is about. The different vendors here can be AMD, Intel, NVIDIA, ARM, TI, and so on. The following diagram shows the different vendors and hardware architectures which use the OpenCL specification to leverage the hardware capabilities: The heterogeneous system The OpenCL framework defines a language to write "kernels". These kernels are functions which are capable of running on different compute devices. OpenCL defines an extended C language for writing compute kernels, and a set of APIs for creating and managing these kernels. The compute kernels are compiled with a runtime compiler, which compiles them on-the-fly during host application execution for the targeted device. 
This enables the host application to take advantage of all the compute devices in the system with a single set of portable compute kernels. Based on your interest and hardware availability, you might want to do OpenCL programming with a "host and device" combination of "CPU and CPU" or "CPU and GPU". Both have their own programming strategy. In CPUs you can run very large kernels as the CPU architecture supports out-of-order instruction level parallelism and have large caches. For the GPU you will be better off writing small kernels for better performance. Hardware and software vendors There are various hardware vendors who support OpenCL. Every OpenCL vendor provides OpenCL runtime libraries. These runtimes are capable of running only on their specific hardware architectures. Not only across different vendors, but within a vendor there may be different types of architectures which might need a different approach towards OpenCL programming. Now let's discuss the various hardware vendors who provide an implementation of OpenCL, to exploit their underlying hardware. Advanced Micro Devices, Inc. (AMD) With the launch of AMD A Series APU, one of industry's first Accelerated Processing Unit (APU), AMD is leading the efforts of integrating both the x86_64 CPU and GPU dies in one chip. It has four cores of CPU processing power, and also a four or five graphics SIMD engine, depending on the silicon part which you wish to buy. The following figure shows the block diagram of AMD APU architecture: AMD architecture diagram—© 2011, Advanced Micro Devices, Inc. An AMD GPU consist of a number of Compute Engines (CU) and each CU has 16 ALUs. Further, each ALU is a VLIW4 SIMD processor and it could execute a bundle of four or five independent instructions. Each CU could be issued a group of 64 work items which form the work group (wavefront). AMD Radeon ™ HD 6XXX graphics processors uses this design. The following figure shows the HD 6XXX series Compute unit, which has 16 SIMD engines, each of which has four processing elements: AMD Radeon HD 6xxx Series SIMD Engine—© 2011, Advanced Micro Devices, Inc. Starting with the AMD Radeon HD 7XXX series of graphics processors from AMD, there were significant architectural changes. AMD introduced the new Graphics Core Next (GCN) architecture. The following figure shows an GCN compute unit which has 4 SIMD engines and each engine is 16 lanes wide: GCN Compute Unit—© 2011, Advanced Micro Devices, Inc. A group of these Compute Units forms an AMD HD 7xxx Graphics Processor. In GCN, each CU includes four separate SIMD units for vector processing. Each of these SIMD units simultaneously execute a single operation across 16 work items, but each can be working on a separate wavefront. Apart from the APUs, AMD also provides discrete graphics cards. The latest family of graphics card, HD 7XXX, and beyond uses the GCN architecture. NVIDIA® One of NVIDIA GPU architectures is codenamed "Kepler". GeForce® GTX 680 is one Kepler architectural silicon part. Each Kepler GPU consists of different configurations of Graphics Processing Clusters (GPC) and streaming multiprocessors. The GTX 680 consists of four GPCs and eight SMXs as shown in the following figure: NVIDIA Kepler architecture—GTX 680, © NVIDIA® Kepler architecture is part of the GTX 6XX and GTX 7XX family of NVIDIA discrete cards. Prior to Kepler, NVIDIA had Fermi architecture which was part of the GTX 5XX family of discrete and mobile graphic processing units. 
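The compute unit counts described above are not just marketing figures; any OpenCL runtime will report them. The small program below is a sketch using only standard API calls (clGetPlatformIDs, clGetDeviceIDs, clGetDeviceInfo); it takes the first device of the first platform and omits error checking for brevity.

#include <stdio.h>
#ifdef __APPLE__
#include <OpenCL/cl.h>
#else
#include <CL/cl.h>
#endif

int main(void)
{
    cl_platform_id platform;
    cl_device_id device;
    char name[256];
    cl_uint compute_units;

    // Take the first platform and its first device (GPU or CPU).
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, 1, &device, NULL);

    clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(name), name, NULL);
    clGetDeviceInfo(device, CL_DEVICE_MAX_COMPUTE_UNITS,
                    sizeof(compute_units), &compute_units, NULL);

    printf("Device: %s\n", name);
    printf("Compute units: %u\n", compute_units);
    return 0;
}

On AMD hardware CL_DEVICE_MAX_COMPUTE_UNITS corresponds to the number of compute units, and on NVIDIA hardware to the number of streaming multiprocessors, so the printed value can be compared against the block diagrams above.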
Intel® Intel's OpenCL implementation is supported in the Sandy Bridge and Ivy Bridge processor families. Sandy Bridge family architecture is also synonymous with the AMD's APU. These processor architectures also integrated a GPU into the same silicon as the CPU by Intel. Intel changed the design of the L3 cache, and allowed the graphic cores to get access to the L3, which is also called as the last level cache. It is because of this L3 sharing that the graphics performance is good in Intel. Each of the CPUs including the graphics execution unit is connected via Ring Bus. Also each execution unit is a true parallel scalar processor. Sandy Bridge provides the graphics engine HD 2000, with six Execution Units (EU), and HD 3000 (12 EU), and Ivy Bridge provides HD 2500(six EU) and HD 4000 (16 EU). The following figure shows the Sandy bridge architecture with a ring bus, which acts as an interconnect between the cores and the HD graphics: Intel Sandy Bridge architecture—© Intel® ARM Mali™ GPUs ARM also provides GPUs by the name of Mali Graphics processors. The Mali T6XX series of processors come with two, four, or eight graphics cores. These graphic engines deliver graphics compute capability to entry level smartphones, tablets, and Smart TVs. The below diagram shows the Mali T628 graphics processor. ARM Mali—T628 graphics processor, © ARM Mali T628 has eight shader cores or graphic cores. These cores also support Renderscripts APIs besides supporting OpenCL. Besides the four key competitors, companies such as TI (DSP), Altera (FPGA), and Oracle are providing OpenCL implementations for their respective hardware. We suggest you to get hold of the benchmark performance numbers of the different processor architectures we discussed, and try to compare the performance numbers of each of them. This is an important first step towards comparing different architectures, and in the future you might want to select a particular OpenCL platform based on your application workload. OpenCL components Before delving into the programming aspects in OpenCL, we will take a look at the different components in an OpenCL framework. The first thing is the OpenCL specification. The OpenCL specification describes the OpenCL programming architecture details, and a set of APIs to perform specific tasks, which are all required by an application developer. This specification is provided by the Khronos OpenCL consortium. Besides this, Khronos also provides OpenCL header files. They are cl.h, cl_gl.h, cl_platform.h, and so on. An application programmer uses these header files to develop his application and the host compiler links with the OpenCL.lib library on Windows. This library contains the entry points for the runtime DLL OpenCL.dll. On Linux the application program is linked dynamically with the libOpenCL.so shared library. The source code for the OpenCL.lib file is also provided by Khronos. The different OpenCL vendors shall redistribute this OpenCL.lib file and package it along with their OpenCL development SDK. Now the application is ready to be deployed on different platforms. The different components in OpenCL are shown in the following figure: Different components in OpenCL On Windows, at runtime the application first loads the OpenCL.dll dynamic link library which in turn, based on the platform selected, loads the appropriate OpenCL runtime driver by reading the Windows registry entry for the selected platform (either of amdocl.dll or any other vendor OpenCL runtimes). 
On Linux, at runtime the application loads the libOpenCL.so shared library, which in turn reads the file /etc/OpenCL/vendors/*.icd and loads the library for the selected platform. There may be multiple runtime drivers installed, but it is the responsibility of the application developers to choose one of them, or if there are multiple devices in the platforms, he may want to choose all the available platforms. During runtime calls to OpenCL, functions queue parallel tasks on OpenCL capable devices. An example of OpenCL program In this section we will discuss all the necessary steps to run an OpenCL application. Basic software requirements A person involved in OpenCL programming should be very proficient in C programming, and having prior experience in any parallel programming tool will be an added advantage. He or she should be able to break a large problem and find out the data and task parallel regions of the code which he or she is trying to accelerate using OpenCL. An OpenCL programmer should know the underlying architecture for which he/she is trying to program. If you are porting an existing parallel code into OpenCL, then you just need to start learning the OpenCL programming architecture. Besides this a programmer should also have the basic system software details, such as compiling the code and linking it to an appropriate 32 bit or 64 bit library. He should also have knowledge of setting the system path on Windows to the correct DLLs or set the LD_LIBRARY_PATH environment variable in Linux to the correct shared libraries. The common system requirements for Windows and Linux operating systems are as follows: Windows - You should have administrative privileges on the system - Microsoft Windows XP, Vista, or 7 - Microsoft Visual Studio 2005, 2008, or 2010 - Display Drivers for AMD and NVIDIA GPUs. For NVIDIA GPUs you will need display drivers R295 or R300 and above Linux - You should have root permissions to install the SDK - With the vast number of flavors of Linux, practically any supported version which has the corresponding graphic device driver installed for the GPU The GCC compiler tool chain Installing and setting up an OpenCL compliant computer To install OpenCL you need to download an implementation of OpenCL. We discussed about the various hardware and software vendors in a previous section. The major graphic vendors, NVIDIA and AMD have both released implementations of OpenCL for their GPUs. Similarly AMD and Intel provide a CPU-only runtime for OpenCL. OpenCL implementations are available in so-called Software Development Kits (SDK), and often include some useful tools such as debuggers and profilers. The next step is to download and install the SDK for the GPU you have on your computer. Note that not all graphic cards are supported. A list of which graphics cards are supported can be found in the respective vendor specific websites. Also you can take a look at the Khronos OpenCL conformance products list. If you don't have a graphics card, don't worry, you can use your existing processor to run OpenCL samples on CPU as a device. If you are still confused about which device to choose, then take a look at the list of supported devices provided with each release of an OpenCL SDK from different vendors. Installation steps - For NVIDIA installation steps, we suggest you to take a look at the latest installation steps for the CUDA software. First install the GPU computing SDK provided for the OS. 
The following link provides the installation steps for NVIDIA platforms: - For AMD Accelerated Parallel Processing (APP) SDK installation take a look at the AMD APP SDK latest version installation guide. The AMD APP SDK comes with a huge set of sample programs which can be used for running. The following link is where you will find the latest APP SDK installation notes: - For INTEL SDK for OpenCL applications 2013, use the steps provided in the following link: Note these links are subject to change over a period of time. AMD's OpenCL implementation is OpenCL 1.2 conformant. Also download the latest AMD APP SDK version 2.8 or above. For NVIDIA GPU computing, make sure you have a CUDA enabled GPU. Download the latest CUDA release 4.2 or above, and the GPU computing SDK release 4.2 or above. For Intel, download the Intel SDK for OpenCL Applications 2013. We will briefly discuss the installation steps. The installation steps may vary from vendor to vendor. Hence we discuss only AMD's and NVIDIA's installation steps. Note that NVIDIA's CUDA only supports GPU as the device. So we suggest that if you have a non NVIDIA GPU then it would be better that you install AMD APP SDK, as it supports both the AMD GPUs and CPUs as the device. One can have multiple vendor SDKs also installed. This is possible as the OpenCL specification allows runtime selection of the OpenCL platform. This is referred to as the ICD (Installable Client Driver) dispatch mechanism. Installing OpenCL on a Linux system with an AMD graphics card - Make sure you have root privileges and remove all previous installations of APP SDK. - Untar the downloaded SDK. - Run the Install Script Install-AMD-APP.sh. - This will install the developer binary, and samples in folder /opt/AMPAPP/. - Make sure the variables AMDAPPSDKROOT and LD_LIBRARY_PATH are set to the locations where you have installed the APP SDK. For latest details you can refer to the Installation Notes provided with the APP SDK. Linux distributions such as Ubuntu, provide an OpenCL distribution package for vendors such as AMD and NVIDIA. You can use the following command to install the OpenCL runtimes for AMD: sudo apt-get install amd-opencl-dev For NVIDIA you can use the following command: sudo apt-get install nvidia-opencl-dev Note that amd-opencl-dev installs both the CPU and GPU OpenCL implementations. Installing OpenCL on a Linux system with an NVIDIA graphics card - Delete any previous installations of CUDA. - Make sure you have the CUDA supported version of Linux, and run lspci to check the video adapter which the system uses. Download and install the corresponding display driver. - Install the CUDA toolkit which contains the tools needed to compile and build a CUDA application. - Install the GPU computing SDK. This includes sample projects and other resources for constructing CUDA programs. You system is now ready to compile and run any OpenCL code. Installing OpenCL on a Windows system with an AMD graphics card - Download the AMD APP SDK v2.7 and start installation. - Follow the onscreen prompts and perform an express installation. - This installs the AMD APP samples, runtime, and tools such as the APP Profiler and APP Kernel Analyser. - The express installation sets up the environment variables AMDAPPSDKROOT and AMDAPPSDKSAMPLESROOT. - If you select custom install then you will need to set the environment variables to the appropriate path. Go to the samples directory and build the OpenCL samples, using the Microsoft Visual Studio. 
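With any of the SDKs installed so far, a quick sanity check that the runtime and the ICD registration are working, before building the vendor sample projects, is to list the installed platforms. The program below is a minimal sketch (standard OpenCL calls only; error handling is omitted):

#include <stdio.h>
#include <stdlib.h>
#ifdef __APPLE__
#include <OpenCL/cl.h>
#else
#include <CL/cl.h>
#endif

int main(void)
{
    cl_uint num_platforms = 0;
    clGetPlatformIDs(0, NULL, &num_platforms);
    printf("Found %u OpenCL platform(s)\n", num_platforms);

    cl_platform_id *platforms =
        (cl_platform_id *)malloc(sizeof(cl_platform_id) * num_platforms);
    clGetPlatformIDs(num_platforms, platforms, NULL);

    for (cl_uint i = 0; i < num_platforms; i++) {
        char vendor[256], version[256];
        clGetPlatformInfo(platforms[i], CL_PLATFORM_VENDOR,
                          sizeof(vendor), vendor, NULL);
        clGetPlatformInfo(platforms[i], CL_PLATFORM_VERSION,
                          sizeof(version), version, NULL);
        printf("Platform %u: %s (%s)\n", i, vendor, version);
    }
    free(platforms);
    return 0;
}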
Installing OpenCL on a Windows system with an NVIDIA graphics card - Uninstall any previous versions of the CUDA installation. - CUDA 4.2 or above release toolkit requires version R295, R300, or newer of the Windows Vista or Windows XP NVIDIA display driver. - Make sure you install the display driver and then proceed to the installation. - Install the Version 4.2 release of the NVIDIA CUDA toolkit cudatoolkit_4.2_Win_[32|64].exe. - Install the Version 4.2 release of the NVIDIA GPU computing SDK by running gpucomputingsdk_4.2_Win_[32|64].exe. Verify the installation by compiling and running some sample codes. Apple OSX Apple also provides an OpenCL implementation. You will need the XCode developer tools to be installed. Xcode is a complete tool set for building OSX and iOS applications. For more information on building OpenCL applications on OSX visit the following link: Multiple installations As we have stated earlier, there can be multiple installations of OpenCL in a system. This is possible in the OpenCL standard, because all OpenCL applications are linked using a common library called the OpenCL ICD library. Each OpenCL vendor ships this library and the corresponding OpenCL.dll or libOpenCL.so library in its SDK. This library contains the mechanism to select the appropriate vendor-specific runtimes during runtime. The application developer makes this selection. Let's explain this with an example installation of an AMD and Intel OpenCL SDK. In the following screenshot of the Windows Registry Editor you can see two runtime DLLs. It is one of these libraries which is loaded by the OpenCL.dll library, based on the application developer's selection. The following shows the Regedit entry with AMD and Intel OpenCL installations: Registry Editor screenshot, showing multiple installations During runtime, the OpenCL.dll library will read the registry details specific to HKEY_LOCAL_MACHINE\SOFTWARE\Khronos (or libOpenCL.so in Linux, which will read the value of the vendor-specific library in the ICD file in folder /etc/OpenCL/vendors/*.icd), loads the appropriate library, and assigns the function pointers to the loaded library. An application developer can consider OpenCL.dll or libOpenCL.so as the wrapper around different OpenCL vendor libraries. This makes the application developer's life easy and he can link it with OpenCL.lib or libOpenCL.so during link time, and distribute it with his application. This allows the application developer to ship his code for different OpenCL vendors/implementations easily. Implement the SAXPY routine in OpenCL SAXPY can be called the "Hello World" of OpenCL. In the simplest terms, the first OpenCL sample shall compute A = alpha*B + C, where alpha is a constant and A, B, and C are vectors of an arbitrary size n. In linear algebra terms, this operation is called SAXPY (Single precision real Alpha X plus Y). You might have understood by now, that each multiplication and addition operation is independent of the other. So this is a data parallel problem. A simple C program would look something like the following code: void saxpy(int n, float a, float *x, float *y) { for (int i = 0; i < n; ++i) y[i] = a*x[i] + y[i]; } OpenCL code An OpenCL code consists of the host code and the device code. The OpenCL kernel code is highlighted in the following code. This is the code which is compiled at run time and runs on the selected device.
The following sample code computes A = alpha*B + C, where A, B, and C are vectors (arrays) of size given by the VECTOR_SIZE variable:

#include <stdio.h>
#include <stdlib.h>
#ifdef __APPLE__
#include <OpenCL/cl.h>
#else
#include <CL/cl.h>
#endif

#define VECTOR_SIZE 1024

//OpenCL kernel which is run for every work item created.
const char *saxpy_kernel =
"__kernel                                   \n"
"void saxpy_kernel(float alpha,             \n"
"                  __global float *A,       \n"
"                  __global float *B,       \n"
"                  __global float *C)       \n"
"{                                          \n"
"    //Get the index of the work-item       \n"
"    int index = get_global_id(0);          \n"
"    C[index] = alpha* A[index] + B[index]; \n"
"}                                          \n";

int main(void)
{
  int i;
  // Allocate space for vectors A, B and C
  float alpha = 2.0;
  float *A = (float*)malloc(sizeof(float)*VECTOR_SIZE);
  float *B = (float*)malloc(sizeof(float)*VECTOR_SIZE);
  float *C = (float*)malloc(sizeof(float)*VECTOR_SIZE);
  for(i = 0; i < VECTOR_SIZE; i++)
  {
    A[i] = i;
    B[i] = VECTOR_SIZE - i;
    C[i] = 0;
  }

  // Get platform and device information
  cl_platform_id *platforms = NULL;
  cl_uint num_platforms;
  //Set up the Platform
  cl_int clStatus = clGetPlatformIDs(0, NULL, &num_platforms);
  platforms = (cl_platform_id *) malloc(sizeof(cl_platform_id)*num_platforms);
  clStatus = clGetPlatformIDs(num_platforms, platforms, NULL);

  //Get the devices list and choose the device you want to run on
  cl_device_id *device_list = NULL;
  cl_uint num_devices;
  clStatus = clGetDeviceIDs(platforms[0], CL_DEVICE_TYPE_GPU, 0, NULL, &num_devices);
  device_list = (cl_device_id *) malloc(sizeof(cl_device_id)*num_devices);
  clStatus = clGetDeviceIDs(platforms[0], CL_DEVICE_TYPE_GPU, num_devices, device_list, NULL);

  // Create one OpenCL context for each device in the platform
  cl_context context;
  context = clCreateContext(NULL, num_devices, device_list, NULL, NULL, &clStatus);

  // Create a command queue
  cl_command_queue command_queue = clCreateCommandQueue(context, device_list[0], 0, &clStatus);

  // Create memory buffers on the device for each vector
  cl_mem A_clmem = clCreateBuffer(context, CL_MEM_READ_ONLY,  VECTOR_SIZE * sizeof(float), NULL, &clStatus);
  cl_mem B_clmem = clCreateBuffer(context, CL_MEM_READ_ONLY,  VECTOR_SIZE * sizeof(float), NULL, &clStatus);
  cl_mem C_clmem = clCreateBuffer(context, CL_MEM_WRITE_ONLY, VECTOR_SIZE * sizeof(float), NULL, &clStatus);

  // Copy the Buffer A and B to the device
  clStatus = clEnqueueWriteBuffer(command_queue, A_clmem, CL_TRUE, 0, VECTOR_SIZE * sizeof(float), A, 0, NULL, NULL);
  clStatus = clEnqueueWriteBuffer(command_queue, B_clmem, CL_TRUE, 0, VECTOR_SIZE * sizeof(float), B, 0, NULL, NULL);

  // Create a program from the kernel source
  cl_program program = clCreateProgramWithSource(context, 1, (const char **)&saxpy_kernel, NULL, &clStatus);

  // Build the program
  clStatus = clBuildProgram(program, 1, device_list, NULL, NULL, NULL);

  // Create the OpenCL kernel
  cl_kernel kernel = clCreateKernel(program, "saxpy_kernel", &clStatus);

  // Set the arguments of the kernel
  clStatus = clSetKernelArg(kernel, 0, sizeof(float),  (void *)&alpha);
  clStatus = clSetKernelArg(kernel, 1, sizeof(cl_mem), (void *)&A_clmem);
  clStatus = clSetKernelArg(kernel, 2, sizeof(cl_mem), (void *)&B_clmem);
  clStatus = clSetKernelArg(kernel, 3, sizeof(cl_mem), (void *)&C_clmem);

  // Execute the OpenCL kernel on the list
  size_t global_size = VECTOR_SIZE; // Process the entire lists
  size_t local_size = 64;           // Process one item at a time
  clStatus = clEnqueueNDRangeKernel(command_queue, kernel, 1, NULL, &global_size, &local_size, 0, NULL, NULL);

  // Read the cl memory C_clmem on device to the host variable C
  clStatus = clEnqueueReadBuffer(command_queue, C_clmem, CL_TRUE, 0, VECTOR_SIZE * sizeof(float), C, 0, NULL, NULL);

  // Clean up and wait for all the commands to complete.
  clStatus = clFlush(command_queue);
  clStatus = clFinish(command_queue);

  // Display the result to the screen
  for(i = 0; i < VECTOR_SIZE; i++)
    printf("%f * %f + %f = %f\n", alpha, A[i], B[i], C[i]);

  // Finally release all OpenCL allocated objects and host buffers.
  clStatus = clReleaseKernel(kernel);
  clStatus = clReleaseProgram(program);
  clStatus = clReleaseMemObject(A_clmem);
  clStatus = clReleaseMemObject(B_clmem);
  clStatus = clReleaseMemObject(C_clmem);
  clStatus = clReleaseCommandQueue(command_queue);
  clStatus = clReleaseContext(context);
  free(A);
  free(B);
  free(C);
  free(platforms);
  free(device_list);
  return 0;
}

The preceding code can be compiled at the command prompt using the following commands:

Linux:
gcc -I $(AMDAPPSDKROOT)/include -L $(AMDAPPSDKROOT)/lib -lOpenCL saxpy.cpp -o saxpy
./saxpy

Windows:
cl /c saxpy.cpp /I"%AMDAPPSDKROOT%\include"
link /OUT:"saxpy.exe" "%AMDAPPSDKROOT%\lib\x86_64\OpenCL.lib" saxpy.obj
saxpy.exe

If everything is successful, then you will be able to see the result of SAXPY being printed in the terminal. For more ease in compiling the code for different OS platforms and different OpenCL vendors, we distribute the examples in this book with a CMAKE build script. Refer to the documentation on building the samples using the CMAKE build utility. By now you should be able to install an OpenCL implementation which your hardware supports. You can now compile and run any OpenCL sample code on any OpenCL compliant device. You have also learned about the various parallel programming models and solved the data parallel problem of SAXPY computation. Next you can try out some exercises on the existing code. Modify the existing program to take different vector size inputs. Try to use a 2D matrix and perform a similar computation on the matrix. OpenCL program flow Every OpenCL code consists of the host-side code and the device code. The host code coordinates and queues the data transfer and kernel execution commands. The device code executes the kernel code over an index space of work-items called the NDRange. An OpenCL C host code does the following steps: - Allocates memory for host buffers and initializes them. - Gets platform and device information. - Sets up the platform. - Gets the devices list and chooses the type of device you want to run on. - Creates an OpenCL context for the device. - Creates a command queue. - Creates memory buffers on the device for each vector. - Copies the Buffer A and B to the device. - Creates a program from the kernel source. - Builds the program and creates the OpenCL kernel. - Sets the arguments of the kernel. - Executes the OpenCL kernel on the device. - Reads back the memory from the device to the host buffer. This step is optional, you may want to keep the data resident in the device for further processing. - Cleans up and waits for all the commands to complete. - Finally releases all OpenCL allocated objects and host buffers. Run on a different device To make OpenCL run the kernel on the CPU, you can change the enum CL_DEVICE_TYPE_GPU to CL_DEVICE_TYPE_CPU in the call to clGetDeviceIDs. This shows how easy it is to make an OpenCL program run on different compute devices. The first sample source code is self-explanatory and each of the steps is commented. If you are running a multi-GPU hardware system, then you will have to modify the code to use the appropriate device ID.
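If you do have more than one device on a platform, one way to choose the right device ID, not shown in the original sample, is to list the devices first and then pick an index explicitly. The fragment below is a sketch that reuses the platforms array from the SAXPY sample; error checking is omitted.

// List every device on the first platform and print its name and type.
cl_uint num_all_devices = 0;
clGetDeviceIDs(platforms[0], CL_DEVICE_TYPE_ALL, 0, NULL, &num_all_devices);

cl_device_id *all_devices = (cl_device_id *)malloc(sizeof(cl_device_id) * num_all_devices);
clGetDeviceIDs(platforms[0], CL_DEVICE_TYPE_ALL, num_all_devices, all_devices, NULL);

for (cl_uint d = 0; d < num_all_devices; ++d) {
    char name[256];
    cl_device_type type;
    clGetDeviceInfo(all_devices[d], CL_DEVICE_NAME, sizeof(name), name, NULL);
    clGetDeviceInfo(all_devices[d], CL_DEVICE_TYPE, sizeof(type), &type, NULL);
    printf("Device %u: %s (%s)\n", d, name,
           (type & CL_DEVICE_TYPE_GPU) ? "GPU" : "CPU/other");
}

// Pass all_devices[chosen_index] to clCreateContext and clCreateCommandQueue
// instead of device_list[0] in the SAXPY sample to target a specific device.
free(all_devices);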
The OpenCL specification is described in terms of the following four models: - Platform model: This model specifies the host and device specification. The host-side code coordinates the execution of the kernels in the devices. - Memory model: This model specifies the global, local, private, and constant memory. The OpenCL specification describes the hierarchy of memory architecture, regardless of the underlying hardware. - Execution model: This model describes the runtime snapshot of the host and device code. It defines the work-items and how the data maps onto the work-items. - Programming model: The OpenCL programming model supports data parallel and task parallel programming models. This also describes the task synchronization primitives. Finally to conclude this article, General Purpose GPU Computing (GPGPU or just GPU computing) is undeniably a hot topic in this decade. We've seen diminishing results in CPU speeds in the past decade compared to the decade before that. Each successive manufacturing node presents greater challenges than the preceding one. The shrink in process technology is nearing an end, and we cannot expect exponential improvements in serial program execution. Hence, adding more cores to the CPU is the way to go, and thereby parallel programming. A popular law called Gustafson's law suggests that computations involving large data sets can be efficiently parallelized. Summary In this article we got a brief overview of what an OpenCL program will look like. We started with a discussion of various parallel programming techniques, and their pros and cons. Different components of an OpenCL application were discussed. Various vendors providing OpenCL capable hardware were also discussed in this article. Finally, we ended the article with a discussion of a simple OpenCL example, SAXPY. Resources for Article: Further resources on this subject: - Using OpenCL [Article] - Improving Performance with Parallel Programming [Article] - Wrapping OpenCV [Article] About the Author : Koushik Bhattacharyya Koushik Bhattacharyya is working with Advanced Micro Devices, Inc. as Member Technical Staff and also worked as a software developer in NVIDIA®. He did his M.Tech in Computer Science (Gold Medalist) from Indian Statistical Institute, Kolkata, and M.Sc in pure mathematics from Burdwan University. With more than ten years of experience in software development using a number of languages and platforms, Koushik's present area of interest includes parallel programming and machine learning. Ravishekhar Banger Ravishekhar Banger calls himself a "Parallel Programming Dogsbody". Currently he is a specialist in OpenCL programming and works for library optimization using OpenCL. After graduation from SDMCET, Dharwad, in Electrical Engineering, he completed his Masters in Computer Technology from Indian Institute of Technology, Delhi. With more than eight years of industry experience, his present interest lies in General Purpose GPU programming models, parallel programming, and performance optimization for the GPU. Having worked for Samsung and Motorola, he is now a Member of Technical Staff at Advanced Micro Devices, Inc. One of his dreams is to cover most of the Himalayas by foot in various expeditions. You can reach him at ravibanger@gmail.com. Post new comment
https://www.packtpub.com/article/hello_opencl
CC-MAIN-2014-15
en
refinedweb
On 6/15/06, Niall Douglas <s_sourceforge at nedprod.com> wrote: > On 15 Jun 2006 at 8:55, Roman Yakovenko wrote: > > > If you want to insert a code in some specific place, then : > > > > module.body.adopt_creator( code_creators.custom_text_t( ... ), position ) > > ( By default the code creator will be appended to the end of the list ) > > No, this appends into TnFOX.main.cpp. I need it appended into > FXApp.pypp.cpp. If you examine file_writers/multiple_files.py, you > can see it writes license, precompiled header, global headers, global > namespace aliases, wrapper classes and the register function for the > class in question. Nowhere is it letting you write custom text > outside the bodies of these items. I suppose you could add a custom > form of wrapper code creator, but this seems inflexible - you should > be able to insert text anywhere in any output file. :-). This is one of my big problems - I don't know what interface to give to the user. It is not very difficult to implement what you are asking for. I am sure you have an idea, right ? As a temporal solution you can create custom file writer. Or may be to write small script that will add the code to the file. -- Roman Yakovenko C++ Python language binding
https://mail.python.org/pipermail/cplusplus-sig/2006-June/010485.html
CC-MAIN-2014-15
en
refinedweb
PAUSE(2)                   Linux Programmer's Manual                  PAUSE(2)

NAME
       pause - wait for signal

SYNOPSIS
       #include <unistd.h>

       int pause(void);

DESCRIPTION
       The pause() library function causes the invoking process (or thread)
       to sleep until a signal is received that either terminates it or
       causes it to call a signal-catching function.

RETURN VALUE
       The pause() function only returns when a signal was caught and the
       signal-catching function returned. In this case, pause() returns -1,
       and errno is set to EINTR.

Linux                             1995-08-31                          PAUSE(2)
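This excerpt of the man page carries no EXAMPLE section. A minimal usage sketch (mine, not from the page) that installs a SIGINT handler and shows pause() returning after the handler runs:

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static void handler(int sig)
{
    /* write() is used because it is async-signal-safe, unlike printf() */
    const char msg[] = "caught signal\n";
    write(STDOUT_FILENO, msg, sizeof(msg) - 1);
}

int main(void)
{
    struct sigaction sa = {0};
    sa.sa_handler = handler;
    sigaction(SIGINT, &sa, NULL);

    printf("waiting for Ctrl-C...\n");
    pause();                       /* blocks until a signal is caught   */
    printf("pause() returned\n");  /* returns -1 with errno set to EINTR */
    return 0;
}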
http://man.yolinux.com/cgi-bin/man2html?cgi_command=pause(2)
CC-MAIN-2014-15
en
refinedweb
In this program, two threads are created along with the "main" thread Reading and writting multiple files Reading and writting multiple files how can i read and write say two different files at the same time using threads Java threads Java threads What are the two basic ways in which classes that can be run as threads may be defined Synchronization on threads Threads in realtime projects Threads in realtime projects Explain where we use threads in realtime projects with example Coding for life cycle in threads Coding for life cycle in threads program for life cycle in threads Life Cycle of Threads states implementing Multiple-Threads are: As we have seen different...; When you are programming with threads, understanding...;This method returns the number of active threads in a particular thread group and all java,j2eeranja g April 4, 2011 at 1:06 PM this site is very useful for all those who really want to know the basic and advance features of Java. i m recommend everyone for this site who want to learn Java creating multythreadsbhaskar July 7, 2011 at 2:21 PM sir, i have observed here that outputs of botg using Thread or Runnable are not same in line5 of stdout Post your Comment
http://roseindia.net/discussion/22972-Creation-of-Multiple-Threads.html
CC-MAIN-2014-15
en
refinedweb
2008/12/26 Andy Wingo <address@hidden>: > Happy St. Stephen's Day, hackers of the good hack! More eating the good pie than hacking the good hack right now! > I just landed a few patches on the vm branch that integrate backtrace > handling between the interpreter and the VM. `save-stack' saves the VM > stack and the interpreter stack, properly interleaved, as your > computation bounces back and forth between the interpreter and the VM. > > This also means that we now get VM backtraces from scm_backtrace() and > friends. That sounds great. > There are still some hiccups, but this was one of the last things I > needed to get done before merging VM to master. I need to finish > updating the documentation, fix the tracing infrastructure to be like > the traps infrastructure (it already is, mostly), and we'll be done. Nice. Regarding the merge to master, though, - I think that would imply that the VM is included in the next release series (1.10.x or 2.0.x); is that your intention? (I have no objection!) - if yes to that, even so it might help us to hold off on the merge for now, until we've finished reviewing everything else in branch_release-1-8..master, and moving _out_ anything that should be in the next release series. Now the last point makes it sound as though there's actually a plan here, and I'm afraid there isn't really. But I've recently been looking at the branch_release-1-8..master diffs, and reverting some that shouldn't have been there, and I think Ludovic may have been doing a little of that too. I don't think it will take very long for us to complete that, and hence to be confident that everything in master is ready for release. Would that be OK? > Finally, as part of the documentation work, I wrote up a history of > Guile. I'm including it in this mail for comment, but here's the link if > you want to read it that way: > > >;a=blob;f=doc/ref/history.texi;hb=vm > > I still need to fold in some feedback from Ludovic, so consider the > document a draft at this point. > > Cheers, > > Andy > > * begin history.texi: > > @c -*-texinfo-*- > @c This is part of the GNU Guile Reference Manual. > @c Copyright (C) 2008 > @c Free Software Foundation, Inc. > @c See the file guile.texi for copying conditions. > > @node History > @section A Brief History of Guile > >. > > @menu > * The Emacs Thesis:: > * Early Days:: > * A Scheme of Many Maintainers:: > * A Timeline of Selected Guile Releases:: > * Status:: > @end menu > > @node The Emacs Thesis > @subsection The Emacs Thesis > >. I'm afraid I don't understand this last paragraph! i.e. what you mean by "intension" or "built in the Emacs way". >. > > @node Early Days > @subsection Early Days > >). ((((Last full stop should be inside the parentheses.)))) > > acroymn, `. (I never heard that one before! (i.e. the last sentence) Interesting.) >. That's really nicely put! > @node A Scheme of Many Maintainers > @subsection A Scheme of Many Mantainers > > maintaned. > > Of course, a large part of the actual work on Guile has come from > other contributors too numerous to mention, but without whom the world > would be a poorer place. > > @node A Timeline of Selected Guile Releases > @subsection A Timeline of Selected Guile Releases > > @table @asis > @item guile-i --- 4 February 1995 > SCM, turned into a library. > > @item guile-ii --- 6 April 1995 > A low-level module system was added. Tcl/Tk support was added, > allowing extension of Scheme by Tcl or vice versa. POSIX support was > improved, and there was an experimental stab at Java integration. 
> > @item guile-iii --- 18 August 1995 > The C-like syntax, ctax, was improved, but mostly this release > featured a start at the task of breaking Guile into pieces. > > @item 1.0 --- 5 January 1997 > @code{#f} was distinguished from @code{'()}. Green threads were added. > Source-level debugging became more useful, and programmer's and user's > manuals were begun. The module system gained a high-level interface, > which is still used today in more or less the same form. > > @item 1.1 --- 16 May 1997 > @itemx 1.2 --- 24 June 1997 > Support for Tcl/Tk and ctax were split off as separate packages, and > have remained there since. Guile became more compatible with SCSH, and > more useful as a UNIX scripting language. Libguile can now be built as > a shared library, and third-party extensions written in C became > loadable via dynamic linking. > > @item 1.3.0 --- 19 October 1998 > Command-line editing became much more pleasant through the use of the > readline library. The initial support for internationalization via > multi-byte strings was removed, and has yet to be added back, though > UTF-8 hacks are common. Modules gained the ability to have custom > expanders, which is still used for syntax-case macros (and the preliminary Emacs Lisp support) >. Ports have > better support for file descriptors, and fluids were added. > > @item 1.3.2 --- 20 August 1999 > @itemx 1.3.4 --- 25 September 1999 > @itemx 1.4 --- 21 June 2000 > A long list of lispy features were added: hooks, Common Lisp's > @code{format}, optional and keyword procedure arguments, > @code{getopt-long}, sorting, random numbers, and many other fixes and > enhancements. Guile now has an interactive debugger, interactive help, > and gives better backtraces. > > @item 1.6 --- 6 September 2002 > Guile gained support for the R5RS standard, and added a number of SRFI > modules. The module system was expanded with programmatic support for > identifier selection and renaming. The GOOPS object system was merged > into Guile core. > > @item 1.8 --- 20 February 2006 > Guile's arbitrary-precision arithmetic switched to use the GMP > library, and added support for exact rationals. Green threads were > removed in favor of POSIX threads, providing true multiprocessing. > Gettext support was added, and Guile's C API was cleaned up and > orthogonalized in a massive way. > > @item 2.0 --- thus far, only unstable snapshots available > A virtual machine was added to Guile, along with the associated > compiler and toolchain. Support for locales was added. Running Guile > instances became controllable and debuggable from within Emacs, via > GDS. GDS was backported to 1.8.5. An SRFI-compatible interface to > multithreading was added, including thread cancellation. > @end table OK, that answers one of my questions above, and persuasively so. > @node Status > @subsection Status, or: Your Help Needed > > Guile has achieved much of what it set out to achieve, but there is > much remaining to do. > > There is still the old problem of bringing existing applications into > a more Emacs-like experience. Guile has had some successes in this > respect, but still most applications in the GNU system are without > Guile integration. > > Getting Guile to those applications takes an investment, the > ``hacktivation energy'' needed to wire Guile into a program that only > pays off once it is good enough to enable new kinds of behavior. 
This > would be a great way for new hackers to contribute: take an > application that you use and that you know well, think of something > that it can't yet do, and figure out a way to integrate Guile and > implement that task in Guile. > > With time, perhaps this exposure can reverse itself, whereby programs > can run under Guile instead of vice versa, eventually resulting in the > Emacsification of the entire GNU system. Indeed, this is the reason > for the naming of the many Guile modules that live in the @code{ice-9} > namespace, a nod to the fictional substance in Kurt Vonnegut's > novel, Cat's Cradle, capable of acting as a seed crystal to > crystallize the mass of software. > > Implicit to this whole discussion is the idea that dynamic languages > are somehow better than languages like C. While languages like C have > their place, Guile's take on this question is that yes, Scheme is more > expressive than C, and more fun to write. This realization carries an > imperative with it to write as much code in Scheme as possible rather > than in other languages. > > These days it is possible to write extensible applications almost > entirely from high-level languages, through byte-code and native > compilation, speed gains in the underlying hardware, and foreign call > interfaces in the high-level language. Smalltalk systems are like > this, as are Common Lisp-based systems. While there already are a > number of pure-Guile applications out there, users still need to drop > down to C for some tasks: interfacing to system libraries that don't > have prebuilt Guile interfaces, and for some tasks requiring high > performance. > > The addition of the virtual machine in Guile 2.0, together with the > compiler infrastructure, should go a long way to addressing the speed > issues. But there is much optimization to be done. Interested > contributors will find lots of delightful low-hanging fruit, from > simple profile-driven optimization to hacking a just-in-time compiler > from VM bytecode to native code. > > Still, even with an all-Guile application, sometimes you want to > provide an opportunity for users to extend your program from a > language with a syntax that is closer to C, or to Python. Another > interesting idea to consider is compiling e.g. Python to Guile. It's > not that far-fetched of an idea: see for example IronPython or JRuby. > > And then there's Emacs itself. Though there is a somewhat-working > Emacs Lisp translator for Guile, it cannot yet execute all of Emacs > Lisp. A serious integration of Guile with Emacs would replace the > Elisp virtual machine with Guile, and provide the necessary C shims so > that Guile could emulate Emacs' C API. This would give lots of > exciting things to Emacs: native threads, a real object system, more > sophisticated types, cleaner syntax, and access to all of the Guile > extensions. > > Finally, there is another axis of crystallization, the axis between > different Scheme implementations. Guile does not yet support the > latest Scheme standard, R6RS, and should do so. Like all standards, > R6RS is imperfect, but supporting it will allow more code to run on > Guile without modification, and will allow Guile hackers to produce > code compatible with other schemes. Help in this regard would be much > appreciated. Fantastic. I completely take back my initial concerns about this history. Thank you very much for writing it! 
Regarding the detail: everything that you have written is consistent with the history that I know of, but in several places is more detailed than I knew. So I can't verify all of those details, but I have no reason to suspect that any of them are wrong. Regards, Neil
http://lists.gnu.org/archive/html/guile-devel/2008-12/msg00052.html
CC-MAIN-2014-15
en
refinedweb
I think I am getting a namespace collision between Data.ByteString.Lazy.Char8.ByteString and Data.ByteString.Lazy.Internal.ByteString .... here is the error message ....

    Couldn't match expected type `B.ByteString'
      against inferred type `bytestring-0.9.0.1:Data.ByteString.Lazy.Internal.ByteString'

On Tue, Dec 2, 2008 at 8:18 PM, Galchin, Vasili <vigalchin at gmail.com> wrote:
> I am getting a collision with "Internal" .... sigh.
>
> vasili
>
> On Tue, Dec 2, 2008 at 5:59 PM, Duncan Coutts <duncan.coutts at worc.ox.ac.uk> wrote:
http://www.haskell.org/pipermail/haskell-cafe/2008-December/051376.html
CC-MAIN-2014-15
en
refinedweb
#include <hallo.h>
Philip Blundell wrote on Sat Mar 02, 2002 um 10:54:01AM:

> > This will fix the latter problem for sure. I don't know if we
> > will still need to call route, though.
>
> No, the kernel should take care of that.

On Ethernet devices, maybe, but for a point-to-point connection we need the exact settings of the peer host.

Gruss/Regards,
Eduard.
--
If Bill Gates had a dime for every time a Windows box crashed...
...Oh, wait a minute, he already does.
https://lists.debian.org/debian-boot/2002/03/msg00096.html
CC-MAIN-2014-15
en
refinedweb
Delivering. These applications operate in a logically trusted environment and ensure that the information is secure while in flight and at rest. Enterprise class applications require granular partitioning of programming logic that can originate from multiple web domains accessing services and resources hosted on multiple networks and domains. Silverlight being the new RIA platform from Microsoft, there is very little material out on the Internet that focuses on the above enterprise aspects. So, I will be writing a few blog posts that address the enterprise needs. Most Silverlight samples I found on Internet are deployed as a monolithic package that can be few megabytes in size for non trivial applications and can take long time to download on slow networks. In the initial post, I will focus on some fundamentals and subsequently will write on application partitioning through multiple packages deployed on the same web server as well as cross-domain scenarios. In order to reduce the Silverlight plug-in size, it is packaged into core runtime and the SDK libraries. The core runtime is installed as a part of the plug-in installation while the SDK libraries will be downloaded as a part of the application package (with extension .XAP). The number of the SDK DLLs contained inside the application XAP will be based on the dependency of the application code on the SDK. For example, if the application does not program in DLR (Dynamic Language Runtime) languages to Iron Ruby, the Ruby language DLL will never be downloaded to the client. In light of this, it is absolutely important to make sure to trim the DLL references in the Silverlight project so that the package will not bloat unnecessarily. Here is a simple dependency diagram of a Silverlight application contained inside InventoryManager.xap package: The core runtime contains the following managed assemblies (as of beta1): The SDK libraries are optional and will be included in the package (XAP) based on the usage. The following is the list of SDK libraries as of beta1: If the Silverlight application does not program in Iron Ruby, the IronRuby.dll and IronRuby.Libraries.dll will not be included in the package. Likewise WaterMarkedTextBox is a part of the System.Windows.Controls.Extended.dll and should be taken out of the project references if none of the controls contained in this assembly are needed. Silverlight application will be packaged into a compressed archive with an extension .XAP. The XAP file is a standard ZIP file and the contents can be viewed by renaming it to .ZIP and opening it in Windows file explorer. Following is are the contents of the InventoryManager.xap (or InventoryManager.zip after renaming): The AppManifest.xaml will contain a list of included assemblies in the package. This will help enumerate this list using AssemblyPart and Deployment classes. Following is the sample code to load an assembly from a package stream: The XAP archive can be built by hand using a program called Chiron.exe that is installed as a part of the SDK in the following directory on the developer workstation: C:\Program Files\Microsoft SDKs\Silverlight\v2.0\Tools\Chiron\Chiron.exe. The Syntax for building XAP using Chiron: Chiron.exe /xap:InventoryManager.xap /directory:InventoryManager Please keep in mind that Chiron does not compile XAML; it just packages the content into a .XAP file. Chiron will be useful if you need to combine more than one Silverlight IDE project outputs into a single XAP file. 
However, if you don't need to combine multiple projects, you can be content with the IDE build process. In beta1, the IDE is the only way to build the Silverlight project; automated team builds will be supported in subsequent beta releases. Here are a few simple steps to deploy the Silverlight package to a target web server: If the target web server is not IIS, the steps pretty much remain the same except for the format of the hosting web page. The web page may contain HTML or other platform-specific web markup. Having covered some fundamentals, we will look at our first main topic of application partitioning in the next post. We will compose UI from multiple XAPs with the help of PackageUtil (shown above) and SLPackage shown below:

//code not meant for production use
public class SLPackage
{
    private Uri _packageUri;

    public class PackageEventArgs : EventArgs
    {
        private Stream _packageStream;
        private string _packageUri;

        public PackageEventArgs(Stream packageStream, string packageUri)
        {
            this._packageStream = packageStream;
            this._packageUri = packageUri;
        }

        public Stream PackageStream { get { return _packageStream; } }
        public String PackageUri { get { return _packageUri; } }
    }

    public delegate void PackageEventHandler(object sender, PackageEventArgs e);
    public event PackageEventHandler PackageDownloaded;

    public SLPackage(Uri uri)
    {
        _packageUri = uri;
    }

    public void LoadPackage()
    {
        WebClient webClient = new WebClient();
        webClient.OpenReadCompleted += new OpenReadCompletedEventHandler(webClient_OpenReadCompleted);
        webClient.OpenReadAsync(_packageUri);
    }

    private void webClient_OpenReadCompleted(object sender, OpenReadCompletedEventArgs e)
    {
        PackageEventArgs pe = new PackageEventArgs(e.Result, this._packageUri.OriginalString);
        PackageDownloaded(this, pe);
    }
}

PackageUtil and SLPackage will hide the XAP downloading from the application code. We will add more code to the above two classes and use reflection to create instances of UserControl, load the controls into the visual tree, and transfer state information to complete the composition.

Cheers,

Good article, thanks! Will System.Windows.Controls.dll not be part of the runtime? Please see here: It seems a waste of bandwidth to have to download it, as it is pretty common to use controls.
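The PackageUtil listing and the reflection-based composition step are referred to above but do not appear in this copy of the post. The sketch below is only an illustration of that idea: the shape of PackageUtil, the InventoryManager.dll / InventoryManager.Views.InventoryView names, and the LayoutRoot container are my assumptions, while AssemblyPart, StreamResourceInfo, and Application.GetResourceStream are the Silverlight 2 APIs involved.

// Illustrative sketch only - not the author's actual PackageUtil implementation.
using System;
using System.IO;
using System.Reflection;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Resources;

public static class PackageUtil
{
    // Pulls a named assembly out of a downloaded XAP stream (a ZIP archive)
    // and loads it into the current application domain.
    public static Assembly LoadAssemblyFromXap(Stream xapStream, string assemblyName)
    {
        StreamResourceInfo xapInfo = new StreamResourceInfo(xapStream, null);
        StreamResourceInfo dllInfo = Application.GetResourceStream(
            xapInfo, new Uri(assemblyName, UriKind.Relative));

        AssemblyPart part = new AssemblyPart();
        return part.Load(dllInfo.Stream);
    }
}

// Wiring it to SLPackage.PackageDownloaded; type and control names are placeholders.
public partial class CompositionHost : UserControl
{
    private void OnPackageDownloaded(object sender, SLPackage.PackageEventArgs e)
    {
        Assembly assembly = PackageUtil.LoadAssemblyFromXap(e.PackageStream, "InventoryManager.dll");

        // Create the remote UserControl via reflection and attach it to the visual tree.
        Type viewType = assembly.GetType("InventoryManager.Views.InventoryView");
        UserControl view = (UserControl)Activator.CreateInstance(viewType);
        LayoutRoot.Children.Add(view);
    }
}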
http://blogs.msdn.com/b/hanuk/archive/2008/05/11/silverlight-for-the-enterprises-fundamentals.aspx
CC-MAIN-2014-15
en
refinedweb
Hi Bedartha and others, I have matplotlib 1.1.0 running fine on Mac OS X Lion with TkAgg, Qt4Agg, MacOSX and mplh5canvas ;-) backends in good working condition. My humble opinion is that this works because I do not try to replace System Python with my own version. Given that Lion ships with Python 2.7.1, I do not feel the need to upgrade it to Python.org's 2.7.2 and potentially endure these kinds of installation woes. My current installation instructions for NumPy / SciPy / matplotlib / IPython on Lion ("current" because this is quite a moving target!) can be condensed as follows: - use homebrew as far as possible (but don't use it to install Python, doh!) - brew install pkg-config gfortran zeromq pyqt (follow homebrew instructions for PYTHONPATH settings for pyqt dependencies) - use the System NumPy 1.5.1 - it's good enough for my purposes - build git version of SciPy using gcc-4.2 - unfortunately still a (slight) pain until next stable release - sudo easy_install readline nose pyzmq Pygments ipython matplotlib The detailed instructions are available upon request… Once you have pkg-config installed via homebrew or otherwise, you can literally just easy_install matplotlib! (Thanks to JDH for recently updating the PyPI repository) Regards, Ludwig On Thursday 10 November 2011 at 5:44 AM, matplotlib-users-request@... wrote: > Message: 2 > Date: Wed, 09 Nov 2011 11:22:42 -0800 > From: "Russell E. Owen" <rowen@... (mailto:rowen@...)> > Subject: Re: [Matplotlib-users] Matplotlib "show()" error Mac OS X > Lion > To: matplotlib-users@... (mailto:matplotlib-users@...) > Message-ID: <rowen-75CFDA.11224209112011@... (mailto:rowen-75CFDA.11224209112011@...)> > > In article <629E8F74-C832-4500-9768-8AF5FB39D633@... (mailto:629E8F74-C832-4500-9768-8AF5FB39D633@...)>, > Bedartha Goswami <goswami@... (mailto:goswami@...)> > wrote: > > > Hi, > > > > I have recently installed Python 32/64bit from Python.org () and then I > > proceeded to install bumpy, scipy, matplotlib and igraph on it. But the > > Matplotlib does not show the plots even if it opens a Figure window. Here is > > a summary of what I had done in my installation: > > ----- > > I first did a "clean install" by following the instructions at (with an idea > > to reinstall Matplotlib and see if the rror repeats): > > > > ----- > > So now my python does not have matplotlib: > > > > > > Type "help", "copyright", "credits" or "license" for more information. > > > > > import matplotlib > > > > > > > > > > > Traceback (most recent call last): > > File "<stdin>", line 1, in <module> > > ImportError: No module named matplotlib > > > > > > > > > > > > > > > > Bedarthas-MacBook-Air:Desktop bedartha$ > > ----- > > Then I downloaded (again) the DMG file at: > > > > atplotlib-1.1.0-py2.7-python.org-macosx10.3.dmg/download > > ----- > > and installed Matplotlib (which seems to go through fine). But after that: > > > > > > > > > I suspect you are trying to install matplotlib on the 64-bit Python > instead of the 32-bit python for which it was built > > I say this because 32-bit python is built using GCC 4.0.1. > > There is no matplotlib binary for 64-bit Python yet because I've not > figured out how to build one successfully -- I get horrible conflicts > with Tcl/Tk. > > -- Russell Hi Russell, The System Python 2.7.1 on 10.7 is also 64-bit (I checked the size of sys.maxint to confirm this). I have built matplotlib 1.1.0 on this with the standard Lion LLVM compiler (i686-apple-darwin11-llvm-gcc-4.2) and TkAgg works fine. 
Once you have pkg-config on your system (I use homebrew for this), matplotlib can be built without any further ado - no compiler bypasses, no files to edit. It can even be easy_installed directly from PyPI. Just thinking out loud - would it make sense to base the dmg installer on System Python instead? My feeling is that the *average* user is best served in this way. There is very little difference between System Python 2.7.1 and Python.org Python 2.7.2, except that the former does not have compilation issues. Also, the System NumPy 1.5.1 is more than adequate for the average user and does not need to be replaced too. In this case, the average user can literally download the dmg and install it on a vanilla Lion system without any other dependencies. To me, this would represent the best default packaging of matplotlib on the Mac, and I would consider anything else a custom installation. I've been using matplotlib and friends for nearly four years now on System Python (since Leopard) as a non-average user :-), and I've never felt the need to use a different Python. It certainly simplifies installation... Regards, Ludwig Hi, >> As a test, try to set your backend to either 'cocoaagg' or 'macosx' like so: import matplotlib as mpl mpl.use('cocoaagg') There have been issues with TkAgg on macs. I have personally not had any success with it (even with ActiveState's Tcl). >> >> >: 'use_2to3' > warnings.warn(msg) > error: Could not find required distribution pyobjc-core >> I am sorry for this but I easy_install-ed pyobjc-core, and then easy_install-ed PyObjC. Now the pyplot.show() command works. My matplotlib is now up and working! Thank you, (esp ben.root) Bedartha
http://sourceforge.net/p/matplotlib/mailman/matplotlib-users/thread/74FBF93C1EBA42EAB3D0FC86666DDE05@gmail.com/
CC-MAIN-2014-15
en
refinedweb
Guide to Java 8 groupingBy Collector Last modified: April 10, 2022 1. Introduction In this tutorial, we'll see how the groupingBy collector works using various examples. For us to understand the material covered in this tutorial, we'll need a basic knowledge of Java 8 features. We can have a look at the intro to Java 8 Streams and the guide to Java 8's Collectors for these basics. Further reading: Collect a Java Stream to an Immutable Collection Java 8 Collectors toMap instance. The overloaded methods of groupingBy are: First, with a classification function as the method parameter: static <T,K> Collector<T,?,Map<K,List<T>>> groupingBy(Function<? super T,? extends K> classifier) Secondly, with a classification function and a second collector as method parameters: static <T,K,A,D> Collector<T,?,Map<K,D>> groupingBy(Function<? super T,? extends K> classifier, Collector<? super T,A,D> downstream) Finally, with a classification function, a supplier method (that provides the Map implementation which contains; } Next, the BlogPostType: enum BlogPostType { NEWS, REVIEW, GUIDE } Then. We use the value returned by the function as a key to the map that we get from the groupingBy collector. To group the blog posts in the blog post list by their type: Map<BlogPostType, List<BlogPost>> postsPerType = posts.stream() .collect(groupingBy(BlogPost::getType)); 2.3. groupingBy with a Complex Map Key Type The classification function is not limited to returning only a scalar or String value. The key of the resulting map could be any object as long as we make sure that we implement the necessary equals and hashcode methods. To group using two fields as keys, we can use the Pair class provided in the javafx.util or org.apache.commons.lang3.tuple packages. For example to group the blog posts in the list, by the type and author combined in an Apache Commons Pair instance: Map<Pair<BlogPostType, String>, List<BlogPost>> postsPerTypeAndAuthor = posts.stream() .collect(groupingBy(post -> new ImmutablePair<>(post.getType(), post.getAuthor()))); Similarly, we can use the Tuple class defined before, this class can be easily generalized to include more fields as needed. The previous example using a Tuple instance will be: Map<Tuple, List<BlogPost>> postsPerTypeAndAuthor = posts.stream() .collect(groupingBy(post -> new Tuple(post.getType(), post.getAuthor()))); Java 16 has introduced the concept of a record as a new form of generating immutable Java classes. The record feature provides us with a simpler, clearer, and safer way to do groupingBy than the Tuple. For example, we have defined a record instance in the BlogPost: public class BlogPost { private String title; private String author; private BlogPostType type; private int likes; record AuthPostTypesLikes(String author, BlogPostType type, int likes) {}; // constructor, getters/setters } Now it's very simple to group the BlotPost in the list by the type, author, and likes using the record instance: Map<BlogPost.AuthPostTypesLikes, List<BlogPost>> postsPerTypeAndAuthor = posts.stream() .collect(groupingBy(post -> new BlogPost.AuthPostTypesLikes(post.getAuthor(), post.getType(), post.getLikes()))); 2.4. Modifying the Returned Map Value Type The second overload of groupingBy takes an additional second collector (downstream collector) that is applied to the results of the first collector. When we specify a classification function,. Grouping by Multiple Fields A different application of the downstream collector is to do a secondary groupingBy. 
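The listing for this secondary grouping does not appear at this point in this copy of the article, so here is a minimal sketch using the same posts list and the same static imports as the earlier snippets (only the variable name is my own):

// Group posts by author first, then by type within each author.
Map<String, Map<BlogPostType, List<BlogPost>>> postsPerAuthorPerType = posts.stream()
    .collect(groupingBy(BlogPost::getAuthor, groupingBy(BlogPost::getType)));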
For instance, they are applied could be empty. This is why the value type in the map is Optional<BlogPost>. 2.9. Getting a Summary for an Attribute of Grouped Results The Collectors API offers a summarizing collector that we can. Aggregating Multiple Attributes of a Grouped Result In the previous sections we've seen how to aggregate one field at a time. There are some techniques that we can follow to do aggregations over multiple fields. The first approach is to use Collectors::collectingAndThen for the downstream collector of groupingBy. For the first parameter of collectingAndThen we collect the stream into a list, using Collectors::toList. The second parameter applies the finishing transformation, we can use it with any of the Collectors' class methods that support aggregations to get our desired results. For example, let's group by author and for each one we count the number of titles, list the titles, and provide a summary statistics of the likes. To accomplish this, we start by adding a new record to the BlogPost: public class BlogPost { // ... record PostCountTitlesLikesStats(long postCount, String titles, IntSummaryStatistics likesStats){}; // ... } The implementation of groupingBy and collectingAndThen will be: Map<String, BlogPost.PostcountTitlesLikesStats> postsPerAuthor = posts.stream() .collect(groupingBy(BlogPost::getAuthor, collectingAndThen(toList(), list -> { long count = list.stream() .map(BlogPost::getTitle) .collect(counting()); String titles = list.stream() .map(BlogPost::getTitle) .collect(joining(" : ")); IntSummaryStatistics summary = list.stream() .collect(summarizingInt(BlogPost::getLikes)); return new BlogPost.PostcountTitlesLikesStats(count, titles, summary); }))); In the first parameter of collectingAndThen we get a list of BlogPost. We use it in the finishing transformation as an input to the lambda function to calculate the values to generate PostCountTitlesLikesStats. To get the information for a given author is as simple as: BlogPost.PostcountTitlesLikesStats result = postsPerAuthor.get("Author 1"); assertThat(result.postCount()).isEqualTo(3L); assertThat(result.titles()).isEqualTo("News item 1 : Programming guide : Tech review 2"); assertThat(result.likesStats().getMax()).isEqualTo(20); assertThat(result.likesStats().getMin()).isEqualTo(15); assertThat(result.likesStats().getAverage()).isEqualTo(16.666d, offset(0.001d)); We can also do more sophisticated aggregations if we use Collectors::toMap to collect and aggregate the elements of the stream. Let's consider a simple example where we want to group the BlogPost elements by author and concatenate the titles with an upper bounded sum of like scores. First, we create the record that is going to encapsulate our aggregated result: public class BlogPost { // ... record TitlesBoundedSumOfLikes(String titles, int boundedSumOfLikes) {}; // ... } Then we group and accumulate the stream in the following manner: int maxValLikes = 17; Map<String, BlogPost.TitlesBoundedSumOfLikes> postsPerAuthor = posts.stream() .collect(toMap(BlogPost::getAuthor, post -> { int likes = (post.getLikes() > maxValLikes) ? maxValLikes : post.getLikes(); return new BlogPost.TitlesBoundedSumOfLikes(post.getTitle(), likes); }, (u1, u2) -> { int likes = (u2.boundedSumOfLikes() > maxValLikes) ? 
maxValLikes : u2.boundedSumOfLikes(); return new BlogPost.TitlesBoundedSumOfLikes(u1.titles().toUpperCase() + " : " + u2.titles().toUpperCase(), u1.boundedSumOfLikes() + likes); })); The first parameter of toMap groups the keys applying BlogPost::getAuthor. The second parameter transforms the values of the map using the lambda function to convert each BlogPost into a TitlesBoundedSumOfLikes record. The third parameter of toMap deals with duplicate elements for a given key and here we use another lambda function to concatenate the titles and sum the likes with a max allowed value specified in maxValLikes. 2.11. Mapping Grouped Results to a Different Type We can achieve more complex aggregations passing an EnumMap supplier function to the groupingBy method: EnumMap<BlogPostType, List<BlogPost>> postsPerType = posts.stream() .collect(groupingBy(BlogPost::getType, () -> new EnumMap<>(BlogPostType.class), toList())); 3. Concurrent groupingBy Collector Similar to groupingBy introduced two new collectors that work well with groupingBy; more information about them can be found here. 5. Conclusion In this article, we explored the usage of the groupingBy collector offered by the Java 8 Collectors API. We learned how groupingBy can be used to classify a stream of elements based on one of their attributes, and how the results of this classification can be further collected, mutated, and reduced to final containers. The complete implementation of the examples in this article can be found in the GitHub project.
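As an addendum to the concurrent collector mentioned in section 3 above (whose sentence lost some words in this copy): groupingByConcurrent collects into a ConcurrentMap and is intended for parallel streams. A small sketch, assuming the same posts list as in the article:

// groupingByConcurrent returns a ConcurrentMap; feed it a parallel stream.
ConcurrentMap<BlogPostType, List<BlogPost>> concurrentPostsPerType = posts.parallelStream()
    .collect(groupingByConcurrent(BlogPost::getType));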
https://www.baeldung.com/java-groupingby-collector
CC-MAIN-2022-27
en
refinedweb
Back to: Design Patterns in C# With Real-Time Examples Using Both Generic and Non-Generic Repository Pattern in C# In this article, I am going to discuss how to implement both generic and non-generic repository patterns in ASP.NET MVC applications using Entity Framework. In most real-time applications, the generic repository contains the methods which are common for all the entities. But if you want some specific operation for some specific repository. Then you need to create a specific repository with the required operations. Here, in this article, I will show you how to implement both generic and specific repositories for an entity. As we already discussed the repository pattern is used to create an abstraction layer between the data access layer and business logic layer to perform the CRUD operations against the underlying database. We also discussed that the repository design pattern can be implemented in the following two ways. So, I strongly recommended you read the following two articles before proceeding to this article. Generic Repository Pattern The generic repository pattern is used to define common database operations such as Create, Retrieve, Update, Delete, etc. for all the database entities in a single class. Non-Generic Repository Pattern (Specific Repository) The non-generic repository pattern is used to define all database operations related to a specific entity within a separate class. For example, if you have two entities let’s say, Employee and Customer, then each entity will have its own implementation repository. So, before implementing both generic and specific repositories let us first understand the implementation guidelines. That means when to use Generic and when to use Specific and when to use both generic and specific in an application. Repository Pattern Implementation Guidelines If you will use one of the above implementations, then with the generic repository, you cannot use specific operations for an entity and in the case of non-generic implementation, you have to write code for common CRUD operations for each entity. So better way is, just to create a generic repository for commonly used CRUD operations, and for specific operations create a non-generic repository and inherit from the generic repository. The below diagram explains the above things. Let’s understand this with one example We are going to work with the same example that we started in the Non-Generic Repository Design pattern, and Continue in Generic Repository Design Pattern articles. So please read these two articles before proceeding to this article. Modify the IGenericRepository.cs file as shown below. 
namespace RepositoryUsingEFinMVC.GenericRepository { public interface IGenericRepository<T> where T : class { IEnumerable<T> GetAll(); T GetById(object id); void Insert(T obj); void Update(T obj); void Delete(object id); void Save(); } } Modify the GenericRepository.cs file as shown below namespace RepositoryUsingEFinMVC.GenericRepository { public class GenericRepository<T> : IGenericRepository<T> where T : class { public EmployeeDBContext _context = null; public DbSet<T> table = null; public GenericRepository() { this._context = new EmployeeDBContext(); table = _context.Set<T>(); } public GenericRepository(EmployeeDBContext _context) { this._context = _context; table = _context.Set<T>(); } public IEnumerable<T> GetAll() { return table.ToList(); } public T GetById(object id) { return table.Find(id); } public void Insert(T obj) { table.Add(obj); } public void Update(T obj) { table.Attach(obj); _context.Entry(obj).State = EntityState.Modified; } public void Delete(object id) { T existing = table.Find(id); table.Remove(existing); } public void Save() { _context.SaveChanges(); } } } Note: The above code is the implementation of the Generic Repository where we implement the code for common CRUD operation for each entity. Now we need to provide a specific implementation for each entity. Let say we want two extra operations for the Employee entity such as to get Employees By gender and get employees by department. As these two operations are specific to the Employee entity there is no point to add these two operations in the Generic Repository. So we need to create a non-generic repository called EmployeeRepository which will also inherit from GenericRepository. Within this repository, we need to provide the two specific operations as shown below. Modify the IEmployeeRepository.cs file as shown below. using RepositoryUsingEFinMVC.DAL; using RepositoryUsingEFinMVC.GenericRepository; using System.Collections.Generic; namespace RepositoryUsingEFinMVC.Repository { public interface IEmployeeRepository : IGenericRepository<Employee> { IEnumerable<Employee> GetEmployeesByGender(string Gender); IEnumerable<Employee> GetEmployeesByDepartment(string Dept); } } Modify the EmployeeRepository.cs file as shown below namespace RepositoryUsingEFinMVC.Repository { public class EmployeeRepository : GenericRepository<Employee>, IEmployeeRepository { public IEnumerable<Employee> GetEmployeesByGender(string Gender) { return _context.Employees.Where(emp => emp.Gender == Gender).ToList(); } public IEnumerable<Employee> GetEmployeesByDepartment(string Dept) { return _context.Employees.Where(emp => emp.Dept == Dept).ToList(); } } } Now we need to use both these generic and non-generic repositories in Employee Controller. Modify the Employee Controller as shown below. 
using RepositoryUsingEFinMVC.Repository; using System.Web.Mvc; using RepositoryUsingEFinMVC.DAL; using RepositoryUsingEFinMVC.GenericRepository; namespace RepositoryUsingEFinMVC.Controllers { public class EmployeeController : Controller { private IGenericRepository<Employee> repository = null; private IEmployeeRepository employee_repository = null; public EmployeeController() { this.employee_repository = new EmployeeRepository(); this.repository = new GenericRepository<Employee>(); } public EmployeeController(EmployeeRepository repository) { this.employee_repository = repository; } public EmployeeController(IGenericRepository<Employee> repository) { this.repository = repository; } [HttpGet] public ActionResult Index() { //you can not access the below two mwthods using generic repository //var model = repository.GetEmployeesByDepartment("IT"); var model = employee_repository.GetEmployeesByGender("Male"); return View(model); } [HttpGet] public ActionResult AddEmployee() { return View(); } [HttpPost] public ActionResult AddEmployee(Employee model) { if (ModelState.IsValid) { repository.Insert(model); repository.Save(); return RedirectToAction("Index", "Employee"); } return View(); } [HttpGet] public ActionResult EditEmployee(int EmployeeId) { Employee model = repository.GetById(EmployeeId); return View(model); } [HttpPost] public ActionResult EditEmployee(Employee model) { if (ModelState.IsValid) { repository.Update(model); repository.Save(); return RedirectToAction("Index", "Employee"); } else { return View(model); } } [HttpGet] public ActionResult DeleteEmployee(int EmployeeId) { Employee model = repository.GetById(EmployeeId); return View(model); } [HttpPost] public ActionResult Delete(int EmployeeID) { repository.Delete(EmployeeID); repository.Save(); return RedirectToAction("Index", "Employee"); } } } Note: Using a specific repository we can access all the operations. But using the generic repository we can access only the operations which are defined in the generic repository. Now run the application and see everything is working as expected. In the next article, I am going to discuss how to use the Unit of Work using both generic and non-generic repositories in the ASP.NET MVC applications. Here, in this article, I try to explain how to implement both generic and non-generic repositories in ASP.NET MVC application using Entity Framework step by step with a simple example. 1 thought on “Using Both Generic and Non-Generic Repository Pattern in c#” Nice Idea
https://dotnettutorials.net/lesson/repository-pattern-implementation-guidelines-csharp/
CC-MAIN-2022-27
en
refinedweb
LinkedHashSet in Java is a class that is present in java.util package. It is the child class of HashSet and implements the set interface. Earlier we have discussed both HashSet and TreeSet in a detailed way and in this tutorial, we are going to share knowledge on LinkedHashSet. It is the same as HashSet and TreeSet but there are slight changes you may observe. You guys can get complete details - Java LinkedHashSet Class Declaration - Hierarchy of LinkedHashSet class - Features of LinkedHashSet Class - Difference between HashSet and LinkedHashSet - Constructors of Java LinkedHashSet Class - Creating LinkedHashSet from Other Collections - Methods of LinkedHashSet Class in Java - Insert Elements to LinkedHashSet Class Example - Java LinkedHashSet Example Java LinkedHashSet Class Declaration public class LinkedHashSet<E> extends HashSet<E> implements Set<E>, Cloneable, Serializable Type Parameters: E – the type of elements maintained by this set Implemented Interfaces: - Serializable - Cloneable, - Iterable<E> - Collection<E> - Set<E> Hierarchy of LinkedHashSet class Do Check: - LinkedHashMap in Java - SortedSet interface in Java with Example - TreeMap in Java with Example - ArrayList in Java with Example Features of LinkedHashSet Class - LinkedHashSet class contains unique elements. - It allows null value. - The underlying data structure for LinkedHashSet is both a hash table and a linked list. - In LinkedHashSet class, the insertion order is preserved. Difference between HashSet and LinkedHashSet LinkedHashSet is similar to HashSet except for the below-mentioned differences. 1. In HashSet insertion order is not preserved it means the elements will not be returned in the same order in which we are inserted, while in the case of LinkedHashSet insertion order must be preserved. 2. The underlying data structure for HashSet is a hash table while the underlying data structure for LinkedHashSet is both hash table and Linked List. 3. HashSet class came in Java 1.2 version while LinkedHashSet class came in Java 1.4 version. Constructors of Java LinkedHashSet Class There are 4 constructors are available in the Java HashSet class, which are described below: 1. LinkedHashSet(): This constructor creates a default linked hash set. 2. LinkedHashSet(int capacity): It is used to initialize the capacity of the linked hash set to the given capacity. 3. LinkedHashSet(int capacity, float loadFactor): This constructor is used to initialize the capacity of the linked hash set to the given capacity and the given load factor. 4. LinkedHashSet(Collection<? extends E> c): It is used to initializes the linked hash set by using the elements of collection c. Creating LinkedHashSet from Other Collections From the below sample program, you guys can easily learn how we can create a linked hash set containing all the elements of other java collections: Output: ArrayList: [2, 4] LinkedHashSet: [2, 4] Methods of LinkedHashSet Class in Java LinkedHashSet class in Java is the child class of HashSet class. It defines the same methods which are available in the HashSet class and it doesn’t add methods of its own. 
Insert Elements to LinkedHashSet Class Example import java.util.LinkedHashSet; class Main { public static void main(String[] args) { LinkedHashSet<Integer> evenNumber = new LinkedHashSet<>(); // Using add() method evenNumber.add(2); evenNumber.add(4); evenNumber.add(6); System.out.println("LinkedHashSet: " + evenNumber); LinkedHashSet<Integer> numbers = new LinkedHashSet<>(); // Using addAll() method numbers.addAll(evenNumber); numbers.add(5); System.out.println("New LinkedHashSet: " + numbers); } } Output: LinkedHashSet: [2, 4, 6] New LinkedHashSet: [2, 4, 6, 5] Java LinkedHashSet Example import java.util.*; class linkedHashSetExample { public static void main(String args[]) { //creating a hash set LinkedHashSet lhset = new LinkedHashSet(); //adding elements in linked hash set lhset.add("Amit"); lhset.add("Sumit"); lhset.add("Rohit"); lhset.add("Virat"); lhset.add("Vijay"); lhset.add("Ajay"); //adding duplicate values lhset.add("Rohit"); lhset.add("Sumit"); //find size of linked hash set System.out.println("Size of the linked hash set is: "+lhset.size()); //displaying original linked hash set System.out.println("The original linked hash set is: " +lhset); //removing element from linked hash set System.out.println("Removing(Rohit) from the linked hash set: " +lhset.remove("Rohit")); //checking specified element present or not System.out.println("(Sumit)present in the linked hash set or not: " +lhset.contains("Sumit")); //displaying updated linked hash set System.out.println("Finally, the updated linked hash set is: "+lhset); } } Output: Size of the linked hash set is: 6 The original linked hash set is: [Amit, Sumit, Rohit, Virat, Vijay, Ajay] Removing(Rohit) from the linked hash set: true (Sumit) present in the linked hash set or not: true Finally, the updated linked hash set is: [Amit, Sumit, Virat, Vijay, Ajay]
https://btechgeeks.com/linkedhashset-in-java-with-example/
CC-MAIN-2022-27
en
refinedweb
USB serial debugging While Particle Devices typically communicate using the Particle Cloud and the Internet, for debugging code and troubleshooting, it's often common to use a USB connection to your computer or laptop. It's also possible to update your user firmware and Device OS over USB. It's not possible for devices to communicate with the Internet over USB; they must use the network type they are designed for, such as Wi-Fi or cellular. Particle devices are a USB device, intended to connect to a computer. They cannot be used as a USB host, so you cannot plug a USB keyboard into a Photon or Argon, for example. You also cannot connect something like a USB RS232 adapter, though they are other, easier ways to add serial ports. You generally cannot connect your phone to a Particle device by USB. The normal method of communicating with a mobile app is over the Internet through the Particle cloud. Gen 3 Particle devices can connect by Bluetooth LE (BLE) to your phone. Gen 2 Wi-Fi devices (Photon, P1) have a limited ability to communicate directly over Wi-Fi for device setup. Some Android phones can plug directly into a Particle device by USB by using a USB OTG ("on the go") adapter, which allows your phone to behave mostly like a USB host, even though it's normally a USB device. Apple iOS (iPhone and iPad) devices do not support USB OTG. Using a terminal program" On Windows, open a Command Prompt window to use the particle serial monitor command. For Mac or Linux, open a Terminal window. One particularly useful command line option is the --follow option which will reconnect to the Particle device if it disconnects. This is helpful because if the device resets (including after a firmware update) USB serial is briefly disconnected. To stop reconnecting, press Ctrl-C. particle serial monitor --follow The Windows Command Prompt has an unusual method of copying to the clipboard: - Press Ctrl-M ("mark"). It's also available if you click on the icon in the upper left corner of the window in the Edit menu. - Drag a selection around the text you want to copy. - Press Enter to Copy to the clipboard (also in the Edit menu). Particle Workbench In Particle Workbench (VS Code), open the command palette (Command-Shift-P on the Mac, Ctrl-Shift-P on Windows and Linux) and select Particle: Serial Monitor. Windows. To close (kill) a screen session, press Ctrl-A then press k. Linux. To close (kill) a screen session, press Ctrl-A then press k. Web serial If you are using the Chrome web browser (version 89 or higher) on Windows, Mac, Linux, or Chromebook, you can use the web-based USB serial monitor. This is particularly useful if you have a Chromebook, or cannot install additional software on your computer. The button below will open a new tab, as you will probably want to be able to use other things like the Web IDE and documentation at the same time. User firmware Here's simple firmware you can flash to your device to test USB serial output: Log in manually (not using single sign-on) In depth #include "Particle.h" This line is only required for .cpp files, not for .ino files. Since it doesn't hurt and is sometimes required, it's easy to just always add it. SerialLogHandler logHandler; This line sets up the serial log handler. This is the the preferred way to output serial debug logs. You can learn more here. - Using the log handler you can adjust the verbosity from a single statement in your code, including setting the logging level per module at different levels. 
- Using the log handler you can redirect the logs between USB serial and UART serial with one line of code. - Other logging handlers allow you to store logs on SD cards, to a network service like syslog, or to a cloud-based service like Solarwinds Papertrail. Being able to switch from USB to UART serial is especially helpful if you are using sleep modes. Because USB serial disconnects on sleep, it can take several seconds for it to reconnect to your computer. By using UART serial (Serial1, for example) with a USB to TTL serial converter, the USB serial will stay connected to the adapter so you can begin logging immediately upon wake. SYSTEM_THREAD(ENABLED); This line enables system threaded mode. It allows your code to run before connected to the cloud, such as when blinking green or cyan. You can learn more here. const std::chrono::milliseconds logPeriod = 5s; unsigned long lastLog; int counter; These lines are part of the code that periodically prints output. The first line is how often to generate messages. In this case, 5 seconds ( 5s). You could change this to 10s (10 seconds), 500ms (500 milliseconds), or 1min (1 minute). void setup() { } There's nothing to set up here. Sometimes you'll see Serial.begin() here, but it's not necessary if you are using SerialLogHandler. void loop() { if (Network.listening()) { return; } This is not always necessary, but one thing to beware of: When using system thread mode, your code also runs when in listening mode (blinking dark blue). If you are continuously writing to the USB serial port from your code, you can prevent listening mode (blinking dark blue) from working properly. This will affect commands like particle identify. Avoiding output when in listening mode allows these commands to work properly. if (millis() - lastLog >= logPeriod.count()) { lastLog = millis(); This code handled executing the inner part of the if at a predefined interval set in logPeriod above. The default is every 5 seconds ( 5s) but you can change the code to other values. Log.info("counter=%d", ++counter); } } And finally, this outputs a message. If you are monitoring the serial output, you should see something like this: 0000010001 [app] INFO: counter=2 0000015001 [app] INFO: counter=3 0000020001 [app] INFO: counter=4 0000025001 [app] INFO: counter=5 The leftmost column is a timestamp in milliseconds. The second column is what module generated the message. You might also see [system] or [comm.protocol] here. The third column is the log level. What do you think would happen if you changed the code to be: Log.error("counter=%d", ++counter); Other logging methods can be found here. More examples The example above uses this code: Log.info("counter=%d", ++counter); There's actually a lot going on here. Log.infocan also be Log.error, Log.warnor Log.tracedepending on the severity of the message. "counter=%d"is a sprintfformatting string. ++counterare the variable arguments to sprintf. In this case, the global variable counteris incremented on every use. Log.trace is not printed by default, so that's good choice for more detailed debugging. See Customizing log levels, below, for more information. 
Here are some more examples of logging statements and their expected output: Log.info("staring test millis=%lu", millis()); // 0000068727 [app] INFO: staring test millis=68727 // To print an int as decimal, use %d Log.info("counter=%d", ++counter); // 0000068728 [app] INFO: counter=1 // To print an int as hexadecimal, use %x int value1 = 1234; Log.info("value1=%d value1=%x (hex)", value1, value1); // 0000068728 [app] INFO: value1=1234 value1=4d2 (hex) // To print a string, use %s const char *testStr1 = "testing 1, 2, 3"; Log.info("value1=%d testStr=%s", value1, testStr1); // 0000068728 [app] INFO: value1=1234 testStr=testing 1, 2, 3 // To print a long integer, use %ld, %lx, etc. long value2 = 123456789; Log.info("value2=%ld value2=%lx (hex)", value2, value2); // 0000068729 [app] INFO: value2=123456789 value2=75bcd15 (hex) // To print to a fixed number of places with leading zeros: Log.info("value2=%08lx (hex, 8 digits, with leading zeros)", value2); // 0000068729 [app] INFO: value2=075bcd15 (hex, 8 digits, with leading zeros) // To print an unsigned long integer (uint32_t), use %lu or %lx uint32_t value3 = 0xaabbccdd; Log.info("value3=%lu value3=%lx (hex)", value3, value3); // 0000068730 [app] INFO: value3=2864434397 value3=aabbccdd (hex) // To print a floating point number, use %f float value4 = 1234.5; Log.info("value4=%f", value4); // 0000068730 [app] INFO: value4=1234.500000 // To print a double floating point, use %lf double value5 = 1234.333; Log.info("value5=%lf", value5); // 0000068731 [app] INFO: value5=1234.333000 // To limit the number of decimal places: Log.info("value5=%.2lf (to two decimal places)", value5); // 0000068732 [app] INFO: value5=1234.33 (to two decimal places) Any time you are printing a String object from Log.info you must do it like this: String myString = "testing!"; Log.info("message: %s", myString.c_str()); // Print my local IP address (Photon, P1, and Argon): Log.info("ip address: %s", WiFi.localIP().toString().c_str()); - Note the use of %sfor a string - You must add the .c_str()at the end, otherwise it will print garbage characters! The reason is that with variable arguments, the compiler does not know you want the c-string version (null terminated, compatible with %s). You can also do it this way if you prefer: Log.info("ip address: %s", (const char *) WiFi.localIP().toString(); If you want to log a MAC address (hardware address, used with Wi-Fi, Ethernet, and BLE), you can print the normal hex format by using: uint8_t addr[6]; Ethernet.macAddress(addr); Log.info("mac: %02x-%02x-%02x-%02x-%02x-%02x", addr[0], addr[1], addr[2], addr[3], addr[4], addr[5]); Sprintf-style formatting, including Log.info() etc. does not support 64-bit integers. It does not support %lld, %llu or Microsoft-style %I64d or %I64u. As a workaround you can use the Print64 firmware library in the community libraries. The source and instructions can be found in GitHub. This can happen if you want to print the event code for a System Event Handler which is type system_event_t which is 64-bits wide. You can learn more about sprintf here. These messages are limited to 200 characters and are truncated if longer. If you want to use write longer data, you can use Log.print(str) which takes a pointer to a null-terminated c-string. Note that the output does not include the timestamp, category, and level, so you may want to precede it with Log.info(), etc. but is not length-limited. You cannot use printf-style formatting with Log.print(). 
You can also print data in hexadecimal using Log.dump(ptr, len) to print a buffer in hex as specified by pointer and length. It also does not include the timestamp, category, and level. Two-way USB serial Often you will only use USB serial to output message from your program. In fact, the particle serial monitor is really only a monitor and you cannot type back to the device. Sometimes you will want to interact with your program, and it is possible to do two-way communication using programs like screen (Mac or Linux) or PuTTY and CoolTerm (Windows). You can also use the Arduino IDE as a two-way serial program. It will print messages received in the USB serial debug output like this: 0000190001 [app] INFO: counter=38 0000195001 [app] INFO: counter=39 0000195304 [app] INFO: received Testing! 0000200001 [app] INFO: counter=40 Customizing log levels In the example above we used this statement: SerialLogHandler logHandler; A common variation is to specify the level: SerialLogHandler logHandler(LOG_LEVEL_TRACE); Setting the level to LOG_LEVEL_TRACEincludes all Log.trace, Log.info, Log.warn, and Log.error. The default if you do not specify is LOG_LEVEL_INFO. The Tracker Edge firmware includes this: SerialLogHandler logHandler(115200, LOG_LEVEL_TRACE, { { "app.gps.nmea", LOG_LEVEL_INFO }, { "app.gps.ubx", LOG_LEVEL_INFO }, { "ncp.at", LOG_LEVEL_INFO }, { "net.ppp.client", LOG_LEVEL_INFO }, }); - The 115200 is optional, as the baud rate is ignored. It's there so you can change it to be a Serial1LogHandlerwith a 1-character change and have it work with hardware UART serial. - It sets the default log level to LOG_LEVEL_TRACE. - For the category "app.gps.nmea" (the GPS library), it sets the level to LOG_LEVEL_INFOas the logs are too verbose at LOG_LEVEL_TRACEunless you are debugging a GPS issue. - Same for the other categories, feel free to add more. You may also want to go the other way, and set the default to LOG_LEVEL_INFO and increase it for specific modules you want detailed debugging on. SerialLogHandler logHandler(115200, LOG_LEVEL_INFO, { { "app.mymodule", LOG_LEVEL_TRACE }, }); UART serial The USB serial debug is great, but there are a few cases where it's less convenient. One is when using sleep modes. When the Particle devices goes to sleep, USB is disconnected. When you wake up, it can take a while (a few seconds on Mac or Linux, as many as 8 seconds on Windows) to reconnect. One solution to this is to use UART serial, hardware serial, for example Serial1 on the TX and RX pins, if you are not already using it in your project. You'll need a converter: Converters There are a number of USB to TTL serial converters available. This one from Sparkfun works well with Particle devices; it's fully compatible and has a micro-USB connector like the Photon, Electron, Argon, and Boron. It works without additional drivers on Mac and Linux and there are drivers available for Windows. There are many other options, however: - Make sure your converter is a 3.3V TTL serial converter. - Absolutely not an RS232 converter - that will permanently damage your Particle device! FT232is a good search term. It's the name of FTDI chip in many of these converters. - There are other chipsets like the CH340 that also work fine. A 5V TTL converter will probably work Particle Device TX to converter RX, but never connect a 5V TX to an Argon or Boron RX pin! Gen 3 Particle devices are not 5V tolerant. Using a 3.3V serial converter is a safer option. Particle Debugger The Particle Debugger is also a 3.3V TTL serial converter! 
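If you prefer to script the host side of the USB serial link (the role played above by particle serial monitor, screen, PuTTY, or CoolTerm), a short Python script can do the job. This is a sketch only: it assumes the pyserial package (pip install pyserial), which is not part of Particle's tooling, and the port name below is a placeholder that you would replace with your device's actual port (COM3, /dev/ttyACM0, /dev/tty.usbmodemXXXX, and so on).

import serial  # provided by the pyserial package (an assumption, not Particle tooling)

PORT = "/dev/ttyACM0"  # placeholder; use your device's port name
BAUD = 9600            # the baud rate is ignored for USB serial, so any value works

with serial.Serial(PORT, BAUD, timeout=1) as port:
    port.write(b"Testing!\n")  # optional: send a line back for two-way tests
    while True:                # press Ctrl-C to stop
        line = port.readline().decode(errors="replace").rstrip()
        if line:
            print(line)

As with the terminal programs above, remember that the port disappears briefly whenever the device resets or sleeps, so a more robust script would catch serial exceptions and reopen the port.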
- The 10-pin ribbon cable does not connect TX and RX. - Connect TXon the Photon, Electron, Argon, Boron, etc. to RXon the Particle Debugger. - Optionally connect RXon the Particle device to TXon the Particle Debugger if you need two-way communication. - Make sure you match the baud rates. Otii Arc The Qoitech Otii Arc is a programmable power supply that is great for testing power consumption on Particle devices. An additional benefit is that is has an available TTL serial connection. You can connect the TX output of the Particle device to RX on the Arc, and it will correlate the time of the serial output with the power consumption! User firmware In your user firmware, you'll need to enable Serial1 output. For example: Serial1LogHandler uartLogHandler(115200); Make sure you match the baud rate! Another common setting is 9600: Serial1LogHandler uartLogHandler(9600); You can enable both USB and UART serial output at the same time: SerialLogHandler logHandler; Serial1LogHandler uartLogHandler(115200); You can also choose different log levels based on the destination logger: SerialLogHandler logHandler(LOG_LEVEL_INFO); Serial1LogHandler uartLogHandler(115200, LOG_LEVEL_TRACE); Learn more - Device OS Serial Reference - Device OS Logging Reference - More about serial including hardware UART ports
https://docs.particle.io/firmware/best-practices/usb-serial/
CC-MAIN-2022-27
en
refinedweb
_q_index(): if an object isn't callable, this is invoked
_q_lookup(): if an attribute isn't found, this is invoked

Rather than add Python-like syntax to HTML, just let Python return HTML:

def footer [html] ():
    # Literals are appended to the output
    '<p>Page generated at '
    # Expressions are evaluated
    time.strftime("%Y-%m-%d %H:%M", time.gmtime())
    # Then back to literals
    '</p>'
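As a rough plain-Python illustration of the idea (this is an analogue for explanation only, not the code Quixote actually generates from the [html] template), the literals and the evaluated expressions simply end up concatenated into the HTML string that the function returns:

import time

def footer():
    parts = []
    parts.append('<p>Page generated at ')                         # literal
    parts.append(time.strftime("%Y-%m-%d %H:%M", time.gmtime()))  # expression is evaluated
    parts.append('</p>')                                          # back to a literal
    return ''.join(parts)

print(footer())  # e.g. <p>Page generated at 2005-03-10 09:30</p>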
https://halfcooked.com/presentations/sp-20050310/
CC-MAIN-2022-27
en
refinedweb
In this chapter we'll cover obtaining and installing Python on your system for Windows, Ubuntu Linux, and macOS. We'll also write our first basic Python code and become a acquainted with the essentials Python programming culture, such as the Zen of Python, while never forgetting the comical origins of the name of the language. There are two major versions of the Python language, Python 2 which is the widely deployed legacy language and Python 3 which is the present and future of the language. Much Python code will work without modification between the last version of Python 2 (which is Python 2.7 ()) and recent versions of Python 3, such as Python 3.5 (). However, there are some key differences between the major versions, and in a strict sense the languages are incompatible. We'll be using Python 3.5 for this book, but we'll point out key differences with Python 2 as we go. It's also very likely that, this being a book on Python fundamentals, everything we present will apply to future versions of Python 3, so don't be afraid to try those as they become available. Before we can start programming in Python we need to get hold of a Python environment. Python is a highly portable language and is available on all major operating systems. You will be able to work through this book on Windows, Mac or Linux, and the only major section where we diverge into platform specifics is coming right up — as we install Python 3. As we cover the three platforms, feel free to skip over the sections which aren’t relevant for you. The following are the steps to be performed for Windows platform: For Windows you need to visit the official Python website, and then head to the Downloadspage by clicking the link on the left. For Windows you should choose one of the MSI installers depending on whether you're running on a 32- or 64-bit platform. Download and run the installer. In the installer, decide whether you only want to install Python for yourself, or for all users of your machine. Choose a location for the Python distribution. The default will be in C:\Python35in the root of the C:drive. We don't recommended installing Python into Program Files because the virtualized file store used to isolate applications from each other in Windows Vista and later can interfere with easily installing third-party Python packages. On the Customize Pythonpage of the wizard we recommend keeping the defaults, which use less than 40 MB of space. In addition to installing the Python runtime and standard library, the installer will register various file types, such as *.pyfiles, with the Python interpreter. Once Python has been installed, you'll need to add Python to your system PATHenvironment variable. To do this, from the Control Panel choose System and Security, then System. Another way to get here easily is to hold down your Windows key and press the Break key on your keyboard. Using the task pane on the left choose Advanced System Settingsto open the Advancedtab of the System Propertiesdialog. Click Environment variablesto open the child dialog. If you have Administrator privileges you should be able to add the paths C:\Python35and C:\Python35\Scriptsto the semicolon separated list of entries associated with the PATHsystem variable. If not, you should be able to create, or append to, a PATHvariable specific to your user containing the same value. 
Now open a new console window — either Powershell or cmd will work fine — and verify that you can run python from the command line: > python Python 3.5.0 (v3.5.0:374f501f4567, Sep 13 2015, 02:27:37) [MSC v.1900 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> Welcome to Python! The triple arrow prompt shows you that Python is waiting for your input. At this point you might want to skip forward whilst we show how to install Python on Mac and Linux. For macOS you need to visit the official Python website at. Head to the Downloadpage by clicking the link on the left. On the Downloadpage, find the macOS installer matching your version of macOS and click the link to download it. A DMG Disk Image file downloads, which you open from your Downloads stack or from the Finder. In the Finder window that opens you will see the file Python.mpkgmultipackage installer file. Use the "secondary" click actionto open the context menu for that file. From that menu, select Open. On some versions of macOS you will now be told that the file is from an unidentified developer. Press the Openbutton on this dialog to continue with the installation. You are now in the Python installer program. Follow the directions, clicking through the wizard. There is no need to customize the install, and you should keep the standard settings. When it's available, click the Installbutton to install Python. You may be asked for your password to authorize the installation. Once the installation completes click Closeto close the installer. Now that Python 3 is installed, open a terminal window and verify that you can run Python 3 from the command line: > python Python 3.5.0 (default, Nov 3 2015, 13:17:02) [GCC 4.2.1 Compatible Apple LLVM 6.1.0 (clang-602.0.53)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> Welcome to Python! The triple arrow prompt shows that Python is waiting for your input. To install Python on Linux you will want to use your system's package manager. We'll show how to install Python on a recent version of Ubuntu, but the process is very similar on most other modern Linux distributions. On Ubuntu, first start the Ubuntu Software Center. This can usually be run by clicking on it's icon in the launcher. Alternatively, you can run it from the dashboard by searching on Ubuntu Software Center and clicking the selection. Once you're in the software center, enter the search term python 3.5 in the search bar in the upper right-hand corner and press return. One of the results you'll get will say Python (v3.5) with Python Interpreter (v3.5) in smaller type beneath it. Select this entry and click the Installbutton that appears. You may need to enter your password to install the software at this point. You should now see a progress indicator appear, which will disappear when installation is complete. Open a terminal (using Ctrl+Alt+T) and verify that you can run Python 3.5 from the command line: $ python3.5 Python 3.5.0+ (default, Oct 11 2015, 09:05:38) [GCC 5.2.1 20151010] on linux Type "help", "copyright", "credits" or "license" for more information. >>> Welcome to Python! The triple arrow prompt shows you that Python is waiting for your input. Now that Python is installed and running, you can immediately start using it. This is a good way to get to know the language, as well as a useful tool for experimentation and quick testing during normal development. This Python command line environment is a Read-Eval-Print-Loop. 
Python will READ whatever input we type in, EVALuate it, PRINT the result and then LOOP back to the beginning. You'll often hear it referred to simply as the "REPL". When started, the REPL will print some information about the version of Python you're running, and then it will give you a triple-arrow prompt. This prompt tells you that Python is waiting for you to type something. Within an interactive Python session you can enter fragments of Python programs and see instant results. Let's start with some simple arithmetic: >>> 2 + 2 4 >>> 6 * 7 42 As you can see, Python reads our input, evaluates it, prints the result, and loops around to do the same again. We can assign to variables in the REPL: >>> x = 5 Print their contents simply by typing their name: >>> x 5 Refer to them in expressions: >>> 3 * x 15 Within the REPL you can use the special underscore variable to refer to the most recently printed value, this being one of very few obscure shortcuts in Python: >>> _ 15 Or you can use the special underscore variable in an expression: >>> _ * 2 30 Note Remember that this useful trick only works at the REPL; the underscore doesn't have any special behavior in Python scripts or programs. Notice that not all statements have a return value. When we assigned 5 to x there was no return value, only the side-effect of bringing the variable x into being. Other statements have more visible side-effects. Try the following command: >>> print('Hello, Python') Hello, Python You’ll see that Python immediately evaluates and executes this command, printing the string Hello, Python and returning you to another prompt. It's important to understand that the response here is not the result of the expression evaluated and displayed by the REPL, but is a side-effect of the print() function. As an aside, print is one of the biggest differences between Python 2 and Python 3. In Python 3, the parentheses are required, whereas is Python 2 they were not. This is because in Python 3, print() is a function call. More on functions later. At this point, we should show you how to exit the REPL and get back to your system shell prompt. We do this by sending the end-of-file control character to Python, although unfortunately the means of sending this character varies across platforms. If you're on Mac or Linux, press Ctrl+D to exit. If you regularly switch between platforms and you accidentally press Ctrl+ Z on a Unix-a-like system, you will inadvertently suspend the Python interpreter and return to your operating system shell. To reactivate Python by making it a foreground process again, simply run the fg command: $ fg Now press Enter and couple of times to get the triple arrow Python prompt back: >>> Start your Python 3 interpreter: > python If on Windows or: $ python3 On Mac or Linux. The control flow structures of Python, such as for-loops, while-loops, and if-statements, are all introduced by statements which are terminated by a colon, indicating that the body of the construct is to follow. For example, for-loops require a body, so if you enter: >>> for i in range(5): ... Python will present you with a prompt of three dots to request that you provide the body. One distinctive (and sometimes controversial) aspect of Python is that leading whitespace is syntactically significant. What this means is that Python uses indentation levels, rather the braces used by other languages, to demarcate code blocks.By convention, contemporary Python code is indented by four spaces for each level. 
So when Python present us with the three dot prompt, we provide those four spaces and a statement to form the body of the loop: ... x = i * 10 Our loop body will contain a second statement, so after pressing Return at the next three dot prompt we'll enter another four spaces followed by a call to the built-in print() function: ... print(x) To terminate our block, we must enter a blank line into the REPL: ... With the block complete, Python executes the pending code, printing out the multiples of 10 less than 50: 0 10 20 30 40 Looking at at screenful of Python code, we can see how the indentation clearly matches — and in fact must match — the structure of the program which is as follows: Figure 1.1: Whitespaces in the code Even if we replace the code by gray lines, the structure of the program is clear as shown in the following image: Figure 2.2 : Replaced code with grey lines Each statement terminated by a colon starts a new line and introduces an additional level of indentation, which continues until a dedent restores the indentation to a previous level. Each level of indent is typically four spaces, although we'll cover the rules in more detail in a moment. Python's approach to significant whitespace has three great advantages: - It forces developers to use a single level of indentation in a code-block. This is generally considered good practice in any language because it makes code much more readable. - Code with significant whitespace doesn't need to be cluttered with unnecessary braces, and you never need to have code-standard debates about where the braces should go. All code-blocks in Python code are easily identifiable and everyone writes them the same way. - Significant whitespace requires that a consistent interpretation must be given to the structure of the code by the author, the Python runtime system and future maintainers who need to read the code. As a result you can never have code that contains a block from Python's point of view, but which doesn't look like it contains a block from a cursory human perspective. The rules for Python indentation can seem complex, but they are quite straightforward in practice: The whitespace you use can be either spaces or tabs. The general consensus is that spaces are preferable to tabs, and four spaces has become a standard in the Python community. One essential rule is NEVER to mix spaces and tabs. The Python interpreter will complain, and your colleagues will hunt you down. You are allowed to use different amounts of indentation at different times if you wish. The essential rule is that consecutive lines of code at the same indentation level are considered to be part of the same code block. There are some exceptions to these rules, but they almost always have to do with improving code readability in other ways, for example by breaking up necessarily long statements over multiple lines. This rigorous approach to code formatting is Programming as Guido intended it or, perhaps more appropriately, as Guido indented it! A philosophy of placing a high value on code qualities such as readability gets to the very heart of Python culture, something we'll take a short break to explore now. Many programming languages are at the center of a cultural movement. They have their own communities, values, practices, and philosophy, and Python is no exception. The development of the Python language itself is managed through a series of documents called Python Enhancement Proposals, or PEPs. 
One of the PEPs, called PEP 8, explains how you should format your code, and we follow its guidelines throughout this book. For example, it is PEP 8 which recommends that we use four spaces for indentation in new Python code. Another of these PEPs, called PEP 20 is called “The Zen of Python”. It refers to 20 aphorisms describing the guiding principles of Python, only 19 of which have been written down. Conveniently, the Zen of Python is never further away than the nearest Python interpreter, as it can always be accessed from the REPL by typing: >>>! Throughout this book we'll be highlighting particular nuggets of wisdom from the Zen of Python in moments of zen to understand how they apply to what we have learned. As we've just introduced Python significant indentation, this is a good time for our first moment of zen: Figure 1.1: Moment of zen In time, you'll come to appreciate Python's significant whitespace for the elegance it brings to your code, and the ease with which you can read other's. As mentioned earlier, Python comes with an extensive standard library, an aspect of Python that is often referred to as batteries included. The standard library is structured as modules, a topic we'll discuss in depth later. What's important at this stage is to know that you gain access to standard library modules by using the import keyword. The basic form of importing a module is simply the import keyword followed by a space and the name of the module. For example, lets see how we can use the standard library's math module to compute square roots. At the triple-arrow prompt we type the following command: >>> import math Since import is a statement which doesn't return a value, Python doesn't print anything if the import succeeds, and we're immediately returned to the prompt. We can access the contents of the imported module by using the name of the module, followed by a dot, followed by the name of the attribute in the module that you need. Like many object oriented languages the dot operator is used to drill down into object structures. Being expert Pythonistas, we have inside knowledge that the math module contains a function called sqrt(). Let's try to use the following command: >>> math.sqrt(81) 9.0 But how can we find out what other functions are available in the math module? The REPL has a special function help() which can retrieve any embedded documentation from objects for which documentation has been provided, such as standard library modules. To get help, simply type help at the prompt: >>> help Type help() for interactive help, or help(object) for help about object. We'll leave you to explore the first form — for interactive help — in your own time. Here we'll go for the second option and pass the math module as the object for which we want help: >>>) Return the arc cosine (measured in radians) of x. You can use the space-bar to page through the help, and if you're on Mac or Linux use the arrow keys to scroll up and down. Browsing through the functions, you'll can see that there's a math function, factorial, for computing factorials. Press Q to exit the help browser, and return us to the Python REPL. Now practice using help() to request specific help on the factorial function: >>> help(math.factorial) Help on built-in function factorial in module math: factorial(...) factorial(x) -> Integral Find x!. Raise a ValueError if x is negative or non-integral. Press Q to return to the REPL. Let's use factorial() a bit. 
The function accepts an integer argument and returns an integer value: >>> math.factorial(5) 120 >>> math.factorial(6) 720 Notice how we need to qualify the function name with the module namespace. This is generally good practice, as it makes it abundantly clear where the function is coming from. That said, it can result in code that is excessively verbose. Let's use factorials to compute how many ways there are to draw three fruit from a set of five fruit using some math we learned in school: >>> n = 5 >>> k = 3 >>> math.factorial(n) / (math.factorial(k) * math.factorial(n - k)) 10.0 This simple expression is quite verbose with all those references to the math module. The Python import statement has an alternative form that allows us to bring a specific function from a module into the current namespace by using the from keyword: >>> from math import factorial >>> factorial(n) / (factorial(k) * factorial(n - k)) 10.0 This is a good improvement, but is still a little long-winded for such a simple expression. A third form of the import statement allows us to rename the imported function. This can be useful for reasons of readability, or to avoid a namespace clash. Useful as it is, though, we recommend that this feature be used infrequently and judiciously: >>> from math import factorial as fac >>> fac(n) / (fac(k) * fac(n - k)) 10.0 Remember that when we used factorial() alone it returned an integer. But our more complex expression above for calculating combinations is producing a floating point number. This is because we've used /, Python's floating-point division operator. Since we know our operation will only ever return integral results, we can improve our expression by using //, Python's integer division operator: >>> from math import factorial as fac >>> fac(n) // (fac(k) * fac(n - k)) 10 What's notable is that many other programming languages would fail on the above expression for even moderate values of n. In most programming languages, regular garden variety signed integers can only store values less than 2³¹: >>> 2**31 - 1 2147483647 However, factorials grow so fast that the largest factorial you can fit into a 32-bit signed integer is 12! since 13! is too large: >>> fac(13) 6227020800 In most widely used programming languages you would need either more complex code or more sophisticated mathematics merely to compute how many ways there are to draw 3 fruits from a set of 13!. Python encounters no such problems and can compute with arbitrarily large integers, limited only by the memory in your computer. To demonstrate this further, let's try the larger problem of computing how many different pairs of fruit we can pick from 100 different fruits (assuming we can lay our hands on so many fruit!): >>> n = 100 >>> k = 2 >>> fac(n) // (fac(k) * fac(n - k)) 4950 Just to emphasize how large the size of the first term of that expression is, calculate 100! on its own: >>> fac(n) The resulting number is vastly larger even than the number of atoms in the known universe, with an awful lot of digits. If, like us, you're curious to know exactly how many digits, we can convert our integer to a text string and count the number of characters in it like this: >>> len(str(fac(n))) 158 That's definitely a lot of digits. And a lot of fruit. It also starts to show how Python's different data types — in this case, integers, floating point numbers, and text strings — work together in natural ways. In the next section we'll build on this experience and look at integers, strings, and other built-in types in more detail. Python comes with a number of built-in datatypes. These include primitive scalar types like integers as well as collection types like dictionaries. These built-in types are powerful enough to be used alone for many programming needs, and they can be used as building blocks for creating more complex data types. The basic built-in scalar types we'll look at are:

int — signed, unlimited precision integers
float — IEEE 754 floating-point numbers
None — a special, singular null value
bool — true/false boolean values

For now we'll just be looking at their basic details, showing their literal forms and how to create them. We've already seen Python integers in action quite a lot. Python integers are signed and have, for all practical purposes, unlimited precision. This means that there is no pre-defined limit to the magnitude of the values they can hold. Integer literals in Python are typically specified in decimal: >>> 10 10 They may also be specified in binary with a 0b prefix: >>> 0b10 2 They may also be specified in octal, with a 0o prefix: >>> 0o10 8 For hexadecimal, we use the 0x prefix: >>> 0x10 16 We can also construct integers by a call to the int constructor which can convert from other numeric types, such as floats, to integers: >>> int(3.5) 3 Note that, when using the int constructor, the rounding is always towards zero: >>> int(-3.5) -3 >>> int(3.5) 3 We can also convert strings to integers as follows: >>> int("496") 496 Be aware, though, that Python will throw an exception (much more on those later!) if the string doesn't represent an integer. You can even supply an optional number base when converting from a string. For example, to convert from base 3 simply pass 3 as the second argument to the constructor: >>> int("10000", 3) 81 Floating point numbers are supported in Python by the float type. Python floats are implemented as IEEE-754 double-precision floating point numbers with 53 bits of binary precision. This is equivalent to between 15 and 16 significant digits in decimal. Any literal number containing a decimal point is interpreted by Python as a float: 3.125 Scientific notation can be used, so for large numbers — such as 3×10⁸, the approximate speed of light in metres per second — we can write: >>> 3e8 300000000.0 and for small numbers like Planck's constant, 1.616×10⁻³⁵, we can enter: >>> 1.616e-35 1.616e-35 Notice how Python automatically switches the display representation to the most readable form. As for integers, we can convert to floats from other numeric or string types using the float constructor. For example, the constructor can accept an int: >>> float(7) 7.0 The float constructor can also accept a string as follows: >>> float("1.618") 1.618 By passing certain strings to the float constructor, we can create the special floating point value NaN (short for Not a Number) and also positive and negative infinity: >>> float("nan") nan >>> float("inf") inf >>> float("-inf") -inf Python has a special null value called None, spelled with a capital N. None is frequently used to represent the absence of a value.
The Python REPL never prints None results, so typing None into the REPL has no effect: >>> None >>> The null value None can be bound to variable names just like any other object: >>> a = None and we can test whether an object is None by using Python's is operator: >>> a is None True We can see here that the response is True, which brings us conveniently on to the bool type. The bool type represents logical states and plays an important role in several of Python's control flow structures, as we'll see shortly. As you would expect there are two bool values, True and False, both spelled with initial capitals: >>> True True >>> False False There is also a bool constructor which can be used to convert from other types to bool. Let's look at how it works. For ints, zero is considered falsey and all other values truthy: >>> bool(0) False >>> bool(42) True >>> bool(-1) True We see the same behavior with floats where only zero is considered falsey: >>> bool(0.0) False >>> bool(0.207) True >>> bool(-1.117) True >>> bool(float("NaN")) True When converting from collections, such as strings or lists, only empty collections are treated as falsey. When converting from lists — which we'll look at shortly — we see that only the empty list (shown here in it's literal form of []) evaluates to False: >>> bool([]) False >>> bool([1, 5, 9]) True Similarly, with strings only the empty string, "", evaluates to False when passed to bool: >>> bool("") False >>> bool("Spam") True In particular, you cannot use the bool constructor to convert from string representations of True and False: >>> bool("False") True Since the string False is not empty, it will evaluate to True. These conversions to bool are important because they are widely used in Python if-statements and while-loops which accept bool values in their condition. Boolean values are commonly produced by Python’s relational operators which can be used for comparing objects. Two of the most widely used relational operators are Python's equality and inequality tests, which actually test for equivalence or inequivalence of values. That is, two objects are equivalent if one could use used in place of the other. We'll learn more about the notion of object equivalence later in the book. For now, we'll compare simple integers. Let's start by assigning — or binding — a value to a variable g: >>> g = 20 We test for equality with == as shown in the following command: >>> g == 20 True >>> g == 13 False For inequality we use !=: >>> g != 20 False >>> g != 13 True We can also compare the order of quantities using the rich comparison operators. Use < to determine if the first argument is less than the second: >>> g < 30 True Likewise, use > to determine if the first is greater than the second: >>> g > 30 False You can test less-than or equal-to with <=: >>> g <= 20 True We can use the greater-than or equal-to with >= ,shown as follows: >>> g >= 20 True If you have experience with relational operators from other languages, then Python's operators are probably not surprising at all. Just remember that these operators are comparing equivalence, not identity, a distinction we'll cover in detail in coming chapters. Now that we've examined some basic built-in types, let's look at two important control flow structures which depend on conversions to the bool type: if-statements and while-loops. Conditional statements allow us to branch execution based on the value of an expression. 
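As a quick preview of that distinction (lists are only introduced properly later, so treat this as a peek ahead rather than part of this chapter's toolkit), two separate list objects can be equivalent in value without being the same object:

>>> a = [1, 2, 3]
>>> b = [1, 2, 3]
>>> a == b
True
>>> a is b
False
>>> b = a
>>> a is b
True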
The form of the statement is the if keyword, followed by an expression, terminated by a colon to introduce a new block. Let's try this at the REPL: >>> if True: Remembering to indent four spaces within the block, we add some code to be executed if the condition is True, followed by a blank line to terminate the block: ... print("It's true!") ... It's true! At this point the block will execute, because self-evidently the condition is True. Conversely, if the condition is False, the code in the block does not execute: >>> if False: ... print("It's true!") ... >>> The expression used with the if-statement will be converted to a bool just as if the bool() constructor had been used, so: >>> if bool("eggs"): ... print("Yes please!") ... Yes please! If the value is exactly equivalent to something, we then use the if command as follows: >>> if "eggs": ... print("Yes please!") ... Yes please! Thanks to this useful shorthand, explicit conversion to bool using the bool constructor is rarely used in Python. The if-statement supports an optional else clause which goes in a block introduced by the else keyword (followed by a colon) which is indented to the same level as the if keyword. Let's start by creating (but not finishing) an if-block: >>> h = 42 >>> if h > 50: ... print("Greater than 50") To start the else block in this case, we just omit the indentation after the three dots: ... else: ... print("50 or smaller") ... 50 or smaller For multiple conditions you might be tempted to do something like this: >>> if h > 50: ... print("Greater than 50") ... else: ... if h < 20: ... print("Less than 20") ... else: ... print("Between 20 and 50") ... Between 20 and 50 Whenever you find yourself with an else-block containing a nested if statement, like this, you should consider using Python's elif keyword which is a combined else-if. As the Zen of Python reminds us, Flat is better than nested: >>> if h > 50: ... print("Greater than 50") ... elif h < 20: ... print("Less than 20") ... else: ... print("Between 20 and 50") ... Between 20 and 50 This version is altogether easier to read. Python has two types of loop: for-loops and while-loops. We've already briefly encountered for-loops back when we introduced significant whitespace, and we'll return to them soon, but right now we'll cover while-loops. The While-loops in Python are introduced by the while keyword, which is followed by a boolean expression. As with the condition for if-statements, the expression is implicitly converted to a boolean value as if it has been passed to the bool() constructor. The while statement is terminated by a colon because it introduces a new block. Let's write a loop at the REPL which counts down from five to one. We'll initialize a counter variable called c to five, and keep looping until we reach zero. Another new language feature here is the use of an augmented-assignment operator, -=, to subtract one from the value of the counter on each iteration. Similar augmented assignment operators exist for the other basic math operations such as addition and multiplication: >>> c = 5 >>> while c != 0: ... print(c) ... c -= 1 ... 5 4 3 2 1 Because the condition — or predicate — will be implicitly converted to bool, just as if a call to the bool() constructor were present, we could replace the above code with the following version: >>> c = 5 >>> while c: ... print(c) ... c -= 1 ... 5 4 3 2 1 This works because the conversion of the integer value of c to bool results in True until we get to zero which converts to False. 
That said, to use this short form in this case might be described as un-Pythonic, because, referring back to the Zen of Python, explicit is better than implicit. We place higher value of the readability of the first form over the concision of the second form. The While-loops are often used in Python where an infinite loop is required. We achieve this simply by passing True as the predicate expression to the while construct: >>> while True: ... print("Looping!") ... Looping! Looping! Looping! Looping! Looping! Looping! Looping! Looping! Now you're probably wondering how we get out of this loop and regain control of our REPL! Simply press Ctrl+C: Looping! Looping! Looping! Looping! Looping! Looping!^C Traceback (most recent call last): File "<stdin>", line 2, in <module> KeyboardInterrupt >>> Python intercepts the key stroke and raises a special exception which terminates the loop. We'll be talking much more about what exceptions are, and how to use them, later in Chapter 6, Exceptions. Many programming languages support a loop construct which places the predicate test at the end of the loop rather than at the beginning. For example, C, C++, C# and Java support the do-while construct. Other languages have repeat-until loops instead or as well. This is not the case in Python, where the idiom is to use while True together with an early exit, facilitated by the break statement. The break statement jumps out of the loop — and only the innermost loop if severals loops have been nested — continuing execution immediately after the loop body. Let's look at an example of break, introducing a few other Python features along the way, and examine it line-by-line: >>> while True: ... response = input() ... if int(response) % 7 == 0: ... break ... We start with a while True: for an infinite loop. On the first statement of the while block we use the built-in input() function to request a string from the user. We assign that string to a variable called response. We now use an if-statement to test whether the value provided is divisible by seven. We convert the response string to an integer using the int() constructor and then use the modulus operator, %, to divide by seven and give the remainder. If the remainder is equal to zero, the response was divisible by seven, and we enter the body of the if-block. Within the if-block, now two levels of indentation deep, we start with eight spaces and use the break keyword. break terminates the inner-most loop — in this case the while-loop — and causes execution to jump to the first statement after the loop. Here, that statement is the end of the program. We enter a blank line at the three dots prompt to close both the if-block and the while-block. Our loop will start executing, and will pause at the call to input() waiting for us to enter a number. Let's try a few: 12 67 34 28 >>> As soon as we enter a number divisible by seven the predicate becomes True, we enter the if-block, and then we literally break out of the loop to the end of program, returning us to the REPL prompt. 
- Starting out with Python Obtaining and installing Python 3 Starting the Read-Eval-Print-Loop or REPL Simple arithmetic Creating variables by binding objects to names Printing with the built-in print()function Exiting the REPL with Ctrl+Z (Windows) or Ctrl+D (Unix) Being Pythonic Significant indentation PEP 8 - The Style Guide for Python Code PEP 20 - The Zen of Python Importing modules with the import statement in various forms Finding and browsing Basic types and control flow ints, floats, None, and bool, plus conversions between them Relational operators for equality and ordering tests The if-statements with else and elifblocks The while-loops with implicit conversion to bool Interrupting infinite loops with Ctrl+C Breaking out of loops with break Requesting text from the user with input() Augmented assignment operators
https://www.packtpub.com/product/the-python-apprentice/9781788293181
CC-MAIN-2022-27
en
refinedweb
How to Use Hook_init in Drupal 8? In Drupal 8, most of the Hooks such as hook_init, hook_boot are removed from the Drupal 8. These Hooks are replaced with Event Subscriber in Drupal 8. If you are performing few actions such as redirect, add CSS, add JS or any other modification on request, it can be done by registering Event Subscriber. As we know Drupal 8 introduces Symfony Event Components and includes many Symfony components in Drupal 8 Core. In future versions of Drupal 8, Symfony Events will play a vital role and will enhance the website performance, functionality etc. Available Events in Drupal 8 Symfony 2 framework are included in Drupal 8 core. Drupal 8 uses Symfony Event Components to do the same. Following kernel events are available in Drupal 8: - KernelEvents::CONTROLLER The CONTROLLER event occurs once a controller was found for handling a request. - KernelEvents::EXCEPTION The EXCEPTION event occurs when an uncaught exception appears. - KernelEvents::FINISH_REQUEST The FINISH_REQUEST event occurs when a response was generated for a request. - KernelEvents::REQUEST The REQUEST event occurs at the very beginning of request dispatching. - KernelEvents::RESPONSE The RESPONSE event occurs once a response was created for replying to a request. - KernelEvents::TERMINATE The TERMINATE event occurs once a response was sent. - KernelEvents::VIEW The VIEW event occurs when the return value of a controller is not a Response instance. How to create Module using Events Subscriber in Drupal 8? I am creating a custom redirect module and redirecting the pattern based URL to new URL. For example : to. Module name : customerredirect Steps 1: Create a custom redirect folder in the modules folder. Steps 2: Create readme.md file that will have introduction,requirements,recommended modules and etc about module. Steps 3: Create customredirect.info.yml file that will have information about module like name, type, package, version etc. Steps 4: Create customredirect.routing.yml file and add routing for this module. Steps: 5: Create customredirect.services.yml files and define the required services. Steps 6: Create a src/EventSubscriber folder. Steps 7: Open customredirect.info.yml file and paste the below code: [php] name: Customredirect type: module description: ‘Custom redirect module’ package: Custom version: ‘8.1.0’ core: 8.x [/php] Steps 8: Open customredirect.routing.yml and add below code: [php] customredirect.render: requirements: _permission: ‘access content’ [/php] I have added only the access permission because there is no menu for this module in admin. Steps 9: Open customredirect.services.yml file and add below code: [php] services: customredirect_event_subscriber: class: Drupal\customredirect\EventSubscriber\CustomredirectSubscriber – {name: event_subscriber} [/php] Note: Annotation should be proper because it follows the Symfony annotation mechanism to read configuration. Steps 10: Create a CustomredirectSubscriber.php file in src/EventSubscriber folder and paste the following code: [php] namespace Drupal\customredirect\EventSubscriber; use Symfony\Component\EventDispatcher\EventSubscriberInterface; use Symfony\Component\HttpFoundation\RedirectResponse; use Symfony\Component\HttpKernel\Event\GetResponseEvent; use Symfony\Component\HttpKernel\KernelEvents; use Drupal\Core\Url; /** * Redirect .html pages to corresponding Node page. 
*/ class CustomredirectSubscriber implements EventSubscriberInterface { /** @var int */ private $redirectCode = 301; /** * Redirect pattern based url * @param GetResponseEvent $event */ public function customRedirection(GetResponseEvent $event) { $request = \Drupal::request(); $requestUrl = $request->server->get(‘REQUEST_URI’, null); /** * Here i am redirecting the about-us.html to respective /about-us node. * Here you can implement your logic and search the URL in the DB * and redirect them on the respective node. */ if ($requestUrl==’/about-us.html’) { $response = new RedirectResponse(‘/about-us’, $this->redirectCode); $response->send(); exit(0); } } /** * Listen to kernel.request events and call customRedirection. * {@inheritdoc} * @return array Event names to listen to (key) and methods to call (value) */ public static function getSubscribedEvents() { $events[KernelEvents::REQUEST][] = array(‘customRedirection’); return $events; } } [/php] Conclusion: In Drupal 8, we can use Event Subscriber to replace hook_init or hook_boot and achieve the same functionality that was done by hook_init or hook_boot in Drupal 7.
https://www.tothenew.com/blog/how-to-use-hook_init-in-drupal-8/
CC-MAIN-2022-27
en
refinedweb
@Generated(value="OracleSDKGenerator", comments="API Version: 20200501") public class ChangeNetworkLoadBalancerCompartmentRequest extends BmcRequest<ChangeNetworkLoadBalancerCompartmentDetails> getInvocationCallback, getRetryConfiguration, setInvocationCallback, setRetryConfiguration, supportsExpect100Continue clone, finalize, getClass, notify, notifyAll, wait, wait, wait public ChangeNetworkLoadBalancerCompartmentRequest() public String getNetworkLoadBalancerId() public ChangeNetworkLoadBalancerCompartmentDetails getChangeNetworkLoadBalancerCompartmentDetails() The configuration details for moving a network load balancer to a different compartment. public String getOpcRequestId() The unique Oracle-assigned identifier for the request. If you must contact Oracle about a particular request, then provide the request identifier. public String getOp. public ChangeNetworkLoadBalancerCompartmentDetails getBody$() Alternative accessor for the body parameter. getBody$in class BmcRequest<ChangeNetworkLoadBalancerCompartmentDetails> public ChangeNetworkLoadBalancerCompartmentRequest.Builder toBuilder() Return an instance of ChangeNetworkLoadBalancerCompartmentRequest.Builder that allows you to modify request properties. ChangeNetworkLoadBalancerCompartmentRequest.Builderthat allows you to modify request properties. public static ChangeNetworkLoadeNetworkLoadBalancerCompartmentDetails> public int hashCode() BmcRequest Uses invocationCallback and retryConfiguration to generate a hash. hashCodein class BmcRequest<ChangeNetworkLoadBalancerCompartmentDetails>
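The page above only lists the request's accessors and builder, so a short usage sketch may help tie them together. It follows the common OCI Java SDK builder pattern; the NetworkLoadBalancerClient class, the details builder and the OCIDs below are assumptions for illustration rather than something stated on this page.
import com.oracle.bmc.auth.ConfigFileAuthenticationDetailsProvider;
import com.oracle.bmc.networkloadbalancer.NetworkLoadBalancerClient;
import com.oracle.bmc.networkloadbalancer.model.ChangeNetworkLoadBalancerCompartmentDetails;
import com.oracle.bmc.networkloadbalancer.requests.ChangeNetworkLoadBalancerCompartmentRequest;

public class ChangeNlbCompartmentExample {
    public static void main(String[] args) throws Exception {
        // Credentials read from the default ~/.oci/config profile.
        ConfigFileAuthenticationDetailsProvider provider =
                new ConfigFileAuthenticationDetailsProvider("DEFAULT");
        NetworkLoadBalancerClient client = NetworkLoadBalancerClient.builder().build(provider);

        // Body carrying the target compartment OCID (placeholder value).
        ChangeNetworkLoadBalancerCompartmentDetails details =
                ChangeNetworkLoadBalancerCompartmentDetails.builder()
                        .compartmentId("ocid1.compartment.oc1..exampleuniqueid")
                        .build();

        // Request wiring together the NLB OCID, the body and an optional request id.
        ChangeNetworkLoadBalancerCompartmentRequest request =
                ChangeNetworkLoadBalancerCompartmentRequest.builder()
                        .networkLoadBalancerId("ocid1.networkloadbalancer.oc1..exampleuniqueid")
                        .changeNetworkLoadBalancerCompartmentDetails(details)
                        .opcRequestId("example-request-id")
                        .build();

        client.changeNetworkLoadBalancerCompartment(request);
        client.close();
    }
}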
https://docs.oracle.com/en-us/iaas/tools/java/2.33.0/com/oracle/bmc/networkloadbalancer/requests/ChangeNetworkLoadBalancerCompartmentRequest.html
CC-MAIN-2022-27
en
refinedweb
Technical Articles Easy Descriptive Statistics with Python and SAP HANA Cloud As Solution Advisors, we occasionally participate in POCs which require us to receive and analyze real customer data. Often, the first step to developing an analysis plan is to perform some exploratory data analysis on the data to determine the distribution of values and uniqueness of each column. Since we are dealing with real customer use-cases the datasets tend to be very wide, often with missing data. It is important to be able to get a sense of the quality of the dataset quickly since we may have to get back to the customer with our questions about the dataset before the project and analysis planning can begin. I discovered a nice trick to generate this information within SAP HANA Cloud using the hana_ml Python library and wanted to share it as my first blog post. 😊 Problem: SAP Data Intelligence Fact Sheet cannot be exported The fact sheets on profiled datasets in SAP Data Intelligence provide descriptive statistics on each column but we are not able to save this information as a dataset to manipulate and report on. SAP Data Intelligence fact sheet provides descriptive statistics However, the describe() method in the hana_ml Python library provides a simple way to generate this summary for us. This summary is generated natively in SAP HANA so it is fast as well. Those familiar with Python may already be familiar with describe(). For a pandas DataFrame, calling describe will produce a nice table with descriptive statistics like min, max, mean, and quartile values of each column. The hana_ml Python library has also implemented this method and it is a handy way to generate descriptive statistics on any SAP HANA table. It is very flexible as well, allowing you to save the results natively as an SAP HANA table or bring them into your Python environment. Finally, it is a great way to understand how the Python wrapper works on top of SAP HANA Cloud. SAP HANA DataFrames are SQL statements There are already several great blogs on the SAP HANA DataFrame; in short, it allows you to utilize your Python knowledge to work with your SAP HANA environment. The SAP HANA DataFrame provides a pointer to the data in HANA without storing any of the physical data. It is very flexible and allows you to perform data manipulations and transformations easily. It also lets you move your data between your HANA environment and your Python environment easily. With collect(), you can materialize your SAP HANA DataFrame as a Pandas DataFrame for even greater flexibility. HANA DataFrame represents a table, column or SQL. Use the SAP hana_ml Python library to import local files (e.g. csv, Excel) In addition to generating descriptive statistics on your SAP HANA tables, you can also use the hana_ml library to load local files (e.g. .csv or Excel) to SAP HANA Cloud automatically. To get started, let's first import the necessary libraries and establish our connection to SAP HANA Cloud.
First, let's import the necessary libraries:
import hana_ml
import pandas as pd
import hana_ml.dataframe as dataframe
print('HANA ML version: ' + hana_ml.__version__)
print('Pandas version: ' + pd.__version__)
Next, set up our SAP HANA Cloud connection (the address, user and password below are placeholders for your own tenant details):
# Create connection to HANA Cloud
# Instantiate connection object
conn = dataframe.ConnectionContext(address = '<your HANA Cloud host, e.g. xxxx.hanacloud.ondemand.com>',
                                   port = 443,
                                   user = '<your user>',
                                   password = '<your password>',
                                   encrypt = 'true')
Create connection to HANA tenant and test connection
The sample file I used for this demo is the 2018 file from the Airline Delay and Cancellation Data, 2009-2018. I wanted to test the performance vs. Pandas but you can use whatever file you have handy. Using Pandas, we can import the file and run describe() to get summary statistics in Python.
import pandas as pd
df = pd.read_csv('../datasets/airline-delay/2018.csv')
Pandas describe method
To do the same in SAP HANA Cloud, we'll first need to convert the Pandas DataFrame to an SAP HANA table. We can use create_dataframe_from_pandas() to do this. It will automatically upload the Pandas DataFrame to SAP HANA to create a table or view, and an SAP HANA DataFrame as well. Although we do not need to specify the datatype formats, the automatic conversion is not the most efficient. For example, pandas object types are converted to NVARCHAR(5000). However, we can use Pandas to provide the format lengths of string columns to minimize storage by calculating the maximum length of each string column.
# Subset dataframe to only object (string) types
df2 = df.select_dtypes(include='object')
# Create table of columns and data types
r = df2.dtypes.to_frame('dtypes').reset_index()
# Filter to string columns
sv = r[r['dtypes']=='object'].set_index('index')
# Iterate for each string column
for i in sv.index.to_list():
    # Get max length
    l = df2[i].str.len().max().astype(int)
    # Write max length to columns table
    sv.loc[i, 'len'] = f'NVARCHAR({l})'
# Create dictionary of columns:length
hana_fmt = sv['len'].to_dict()
hana_fmt
Find max length of character strings and create format dictionary
Now, we can use create_dataframe_from_pandas() to create the SAP HANA table. There are many optional options as well, be sure to check out the docs.
# conn - Connection to HANA Cloud
# pandas_df - Pandas dataFrame to upload
# table_name - HANA table name to create
# table_structure - HANA table format dictionary
# allow_bigint - Allows mapping to bigint or int
# force - replace HANA table if exists
df_hana = dataframe.create_dataframe_from_pandas(connection_context = conn,
                                                 pandas_df = df,
                                                 table_name = 'AIRLINES_2018',
                                                 allow_bigint = True,
                                                 force = True)
Creating HANA Cloud table using dtype formats
The code above creates the SAP HANA DataFrame ("df_hana") which points to the "AIRLINES_2018" SAP HANA table, but you can also reference tables as shown below. We can call describe() on that SAP HANA DataFrame to generate the descriptive stats we need. The code below brings the descriptive statistics into Pandas with collect():
%%time
stats = conn.table('AIRLINES_2018')
stats.describe().collect()
SAP HANA describe() collected to Pandas
However, if we want this information in SAP HANA, we can keep it there without bringing it into Pandas, saving it directly as an SAP HANA table.
df_hana.describe().save('STATS_AIRLINE2018')
We can see this table created in the SAP HANA Database Explorer: Descriptive statistics saved as table in SAP HANA Cloud
We can understand what is happening behind the scenes with the select_statement.
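As an aside before we look at the select_statement: the hana_fmt dictionary built earlier is not actually passed in the call above, so here is a sketch of the same upload with the table_structure argument supplied, which gives the string columns their narrower NVARCHAR lengths instead of the NVARCHAR(5000) default.
# Same upload, but passing the column formats computed in hana_fmt
df_hana = dataframe.create_dataframe_from_pandas(connection_context = conn,
                                                 pandas_df = df,
                                                 table_name = 'AIRLINES_2018',
                                                 table_structure = hana_fmt,
                                                 allow_bigint = True,
                                                 force = True)
# Quick sanity check on the uploaded table
print(df_hana.count())
Now, back to the select_statement mentioned above.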
This shows the underlying SQL behind the SAP HANA DataFrame. df_hana.describe().select_statement Underlying SQL statement we do not have to write As you can see, the SAP hana_ml Python library is very flexible and allows you to combine the flexibility to Python with the power of SAP HANA. You can use it to automate the time-consuming task of running descriptive statistics to accelerate the process of data discovery and exploratory data analysis. Special thanks to Andreas Forster, Marc Daniau, and Onno Bagijn and the other bloggers for sharing their invaluable knowledge. Great Blog Jeremy Yu . Very detailed step by step instructions.. Looking forward to the next one.. in this series. me too The article makes me feel like I need to learn more about this field
https://blogs.sap.com/2021/05/24/easy-descriptive-statistics-with-python-and-sap-hana-cloud/
CC-MAIN-2022-27
en
refinedweb
This topic discusses steps you can take to troubleshoot and fix problems with the Cassandra datastore. Cassandra is a persistent datastore that runs in the cassandra component of the hybrid runtime architecture. See also Runtime service configuration overview. Cassandra pods are stuck in the Pending state Symptom When starting up, the Cassandra pods remain in the Pending state. Error message When you use kubectl to view the pod states, you see that one or more Cassandra pods are stuck in the Pending state. The Pending state indicates that Kubernetes is unable to schedule the pod on a node: the pod cannot be created. For example: kubectl get pods -n namespaceNAME READY STATUS RESTARTS AGE adah-resources-install-4762w 0/4 Completed 0 10m apigee-cassandra-0 0/1 Pending 0 10m ... Possible causes A pod stuck in the Pending state can have multiple causes. For example: Diagnosis Use kubectl to describe the pod to determine the source of the error. For example: kubectl -n namespace describe pods pod_name For example: kubectl -n apigee describe pods apigee-cassandra-0 The output may show one of these possible problems: - If the problem is insufficient resources, you will see a Warning message that indicates insufficient CPU or memory. - If the error message indicates that the pod has unbound immediate PersistentVolumeClaims (PVC), it means the pod is not able to create its Persistent volume. Resolution Insufficient resources Modify the Cassandra node pool so that it has sufficient CPU and memory resources. See Resizing a node pool for details. Persistent volume not created If you determine a persistent volume issue, describe the PersistentVolumeClaim (PVC) to determine why it is not being created: - List the PVCs in the cluster: kubectl -n namespace get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE cassandra-data-apigee-cassandra-0 Bound pvc-b247faae-0a2b-11ea-867b-42010a80006e 10Gi RWO standard 15m ... - Describe the PVC for the pod that is failing. For example, the following command describes the PVC bound to the pod apigee-cassandra-0: kubectl apigee describe pvc cassandra-data-apigee-cassandra-0 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning ProvisioningFailed 3m (x143 over 5h) persistentvolume-controller storageclass.storage.k8s.io "apigee-sc" not found Note that in this example, the StorageClass named apigee-scdoes not exist. To resolve this problem, create the missing StorageClass in the cluster, as explained in Change the default StorageClass. See also Debugging Pods. Cassandra pods are stuck in the CrashLoopBackoff state Symptom When starting up, the Cassandra pods remain in the CrashLoopBackoff state. Error message When you use kubectl to view the pod states, you see that one or more Cassandra pods are in the CrashLoopBackoff state. This state indicates that Kubernetes is unable to create the pod. For example: kubectl get pods -n namespaceNAME READY STATUS RESTARTS AGE adah-resources-install-4762w 0/4 Completed 0 10m apigee-cassandra-0 0/1 CrashLoopBackoff 0 10m ... Possible causes A pod stuck in the CrashLoopBackoff state can have multiple causes. For example: Diagnosis Check the Cassandra error log to determine the cause of the problem. 
- List the pods to get the ID of the Cassandra pod that is failing: kubectl get pods -n namespace - Check the failing pod's log: kubectl logs pod_id -n namespace Resolution Look for the following clues in the pod's log: Data center differs from previous data center If you see this log message: Cannot start node if snitch's data center (us-east1) differs from previous data center - Check if there are any stale or old PVC in the cluster and delete them. - If this is a fresh install, delete all the PVCs and re-try the setup. For example: kubectl -n namespace get pvc kubectl -n namespace delete pvc cassandra-data-apigee-cassandra-0 Truststore directory not found If you see this log message: Caused by: java.io.FileNotFoundException: /apigee/cassandra/ssl/truststore.p12 (No such file or directory) Verify the key and certificates if provided in your overrides file are correct and valid. For example: cassandra: sslRootCAPath: path_to_root_ca-file sslCertPath: path-to-tls-cert-file sslKeyPath: path-to-tls-key-file Node failure Symptom When starting up, the Cassandra pods remain in the Pending state. This problem can indicate an underlying node failure. Diagnosis - Determine which Cassandra pods are not running: $ kubectl get pods -n your_namespace NAME READY STATUS RESTARTS AGE cassandra-0 0/1 Pending 0 13s cassandra-1 1/1 Running 0 8d cassandra-2 1/1 Running 0 8d - Check the worker nodes. If one is in the NotReady state, then that is the node that has failed: kubectl get nodes -n your_namespace NAME STATUS ROLES AGE VERSION ip-10-30-1-190.ec2.internal Ready <none> 8d v1.13.2 ip-10-30-1-22.ec2.internal Ready master 8d v1.13.2 ip-10-30-1-36.ec2.internal NotReady <none> 8d v1.13.2 ip-10-30-2-214.ec2.internal Ready <none> 8d v1.13.2 ip-10-30-2-252.ec2.internal Ready <none> 8d v1.13.2 ip-10-30-2-47.ec2.internal Ready <none> 8d v1.13.2 ip-10-30-3-11.ec2.internal Ready <none> 8d v1.13.2 ip-10-30-3-152.ec2.internal Ready <none> 8d v1.13.2 ip-10-30-3-5.ec2.internal Ready <none> 8d v1.13.2 Resolution - Remove the dead Cassandra pod from the cluster. $ kubectl exec -it apigee-cassandra-0 -- nodetool status $ kubectl exec -it apigee-cassandra-0 -- nodetool removenode deadnode_hostID - Remove the VolumeClaim from the dead node to prevent the Cassandra pod from attempting to come up on the dead node because of the affinity: kubectl get pvc -n your_namespace kubectl delete pvc volumeClaim_name -n your_namespace - Update the volume template and create PersistentVolume for the newly added node. The following is an example volume template: apiVersion: v1 kind: PersistentVolume metadata: name: cassandra-data-3 spec: capacity: storage: 100Gi accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain storageClassName: local-storage local: path: /apigee/data nodeAffinity: "required": "nodeSelectorTerms": - "matchExpressions": - "key": "kubernetes.io/hostname" "operator": "In" "values": ["ip-10-30-1-36.ec2.internal"] - Replace the values with the new hostname/IP and apply the template: kubectl apply -f volume-template.yaml
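As a final check after applying the template (not part of the original steps), you can confirm that the replacement Cassandra pod is scheduled and that the ring reports the node as up, reusing the same placeholders as above:
kubectl -n your_namespace get pods -o wide
kubectl -n your_namespace exec -it apigee-cassandra-0 -- nodetool status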
https://cloud.google.com/apigee/docs/hybrid/v1.1/ts-cassandra
CC-MAIN-2022-27
en
refinedweb
NAME SYNOPSIS DESCRIPTION RETURN VALUE CAVEATS SEE ALSO pmemobj_openU()/pmemobj_openW(), pmemobj_createU()/pmemobj_createW(), pmemobj_close(), pmemobj_checkU()/pmemobj_checkW() pmemobj_set_user_data(), pmemobj_get_user_data() #include <libpmemobj.h> PMEMobjpool *pmemobj_openU(const char *path, const char *layout); PMEMobjpool *pmemobj_openW(const wchar_t *path, const char *layout); PMEMobjpool *pmemobj_createU(const char *path, const char *layout, size_t poolsize, mode_t mode); PMEMobjpool *pmemobj_createW(const wchar_t *path, const char *layout, size_t poolsize, mode_t mode); void pmemobj_close(PMEMobjpool *pop); int pmemobj_checkU(const char *path, const char *layout); int pmemobj_checkW(const wchar_t *path, const char *layout); void pmemobj_set_user_data(PMEMobjpool *pop, void *data); void *pmemobj_get_user_data(PMEMobjpool *pop); NOTE: The PMDK API supports UNICODE. If the PMDK_UTF8_API macro is defined, basic API functions are expanded to the UTF-8 API with postfix U. Otherwise they are expanded to the UNICODE API with postfix W. To use the pmem-resident transactional object store provided by libpmemobj(7), a memory pool must first be created with the pmemobj_createU()/pmemobj_createW() function described below. Existing pools may be opened with the pmemobj_openU()/pmemobj_openW() function. None of the three functions described below are thread-safe with respect to any other libpmemobj(7) function. In other words, when creating, opening or deleting a pool, nothing else in the library can happen in parallel, and therefore these functions should be called from the main thread. Once created, the memory pool is represented by an opaque handle, of type PMEMobjpool*, which is passed to most of the other libpmemobj(7) functions. Internally, libpmemobj(7) object memory API provided by libpmemobj(7). The pmemobj_createU()/pmemobj_createW()U()/pmemobj_openW() is called. The layout name, including the terminating null byte ('\0'), cannot be longer than PMEMOBJ_MAX_LAYOUT as defined in <libpmemobj.h>. A NULL layout is equivalent to using an empty string as a layout name.obj_createU()/pmemobj_createW(), and then specifying poolsize as zero. In this case pmemobj_createU()/pmemobj_createW()U()/pmemobj_createW(); however, the recommended method for creating pool sets is with the pmempool(1) utility. When creating a pool set consisting of multiple files, the path argument passed to pmemobj_createU()/pmemobj_createW()U()/pmemobj_openW() function opens an existing object store memory pool. Similar to pmemobj_createU()/pmemobj_createW(), path must identify either an existing obj memory pool file, or the set file used to create a pool set. If layout is non-NULL, it is compared to the layout name provided to pmemobj_createU()/pmemobj_createW() when the pool was first created. This can be used to verify that the layout of the pool matches what was expected. The application must have permission to open the file and memory map itobj_close() function closes the memory pool indicated by pop and deletes the memory pool handle. The object store itself lives on in the file that contains it and may be re-opened at a later time using pmemobj_openU()/pmemobj_openW() as described above. The pmemobj_checkU()/pmemobj_checkW() function performs a consistency check of the file indicated by path. pmemobj_checkU()/pmemobj_checkW(). The pmemobj_createU()/pmemobj_createW() function returns a memory pool handle to be used with most of the functions in libpmemobj(7). On error it returns NULL and sets errno appropriately. 
The pmemobj_openU()/pmemobj_openW() function returns a memory pool handle to be used with most of the functions in libpmemobj(7). If an error prevents the pool from being opened, or if the given layout does not match the pool’s layout, pmemobj_openU()/pmemobj_openW() returns NULL and sets errno appropriately. The pmemobj_close() function returns no value. The pmemobj_checkU()/pmemobj_checkW() function returns 1 if the memory pool is found to be consistent. Any inconsistencies found will cause pmemobj_checkU()/pmemobj_checkW()U()/pmemobj_checkW() returns -1 and sets errno if it cannot perform the consistency check due to other errors. Not all file systems support posix_fallocate(3). pmemobj_createU()/pmemobj_createW() will fail if the underlying file system does not support posix_fallocate(3). On Windows if pmemobj_createU()/pmemobj_createW() is called on an existing file with FILE_ATTRIBUTE_SPARSE_FILE and FILE_ATTRIBUTE_COMPRESSED set, they will be removed, to physically allocate space for the pool. This is a workaround for _chsize() performance issues. creat(2), msync(2), pmem_is_pmem(3), pmem_persist(3), posix_fallocate(3), libpmem(7), libpmemobj(7) and The contents of this web site and the associated GitHub repositories are BSD-licensed open source.
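A minimal create-or-open sketch using the UTF-8 prototypes from the SYNOPSIS above. The file name, layout string and use of PMEMOBJ_MIN_POOL are illustrative only, and error handling is reduced to a bare check.
#include <stdio.h>
#include <libpmemobj.h>

#define LAYOUT_NAME "intro_layout"	/* must match on later opens */

int
main(void)
{
	/* try to create a new pool; fall back to opening an existing one */
	PMEMobjpool *pop = pmemobj_createU("testfile.obj", LAYOUT_NAME,
			PMEMOBJ_MIN_POOL, 0666);
	if (pop == NULL)
		pop = pmemobj_openU("testfile.obj", LAYOUT_NAME);
	if (pop == NULL) {
		perror("pmemobj_create/open");
		return 1;
	}

	/* ... use the transactional object store here ... */

	pmemobj_close(pop);
	return 0;
}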
https://pmem.io/pmdk/manpages/windows/v1.10/libpmemobj/pmemobj_open.3/
CC-MAIN-2022-27
en
refinedweb
In this section we will discuss the Java IO FileOutputStream. FileOutputStream is a class of the java.io package which makes it possible to write output stream data to a file. FileOutputStream writes a stream of raw bytes. OutputStream is the base class of the FileOutputStream class. Several constructors of FileOutputStream are provided to the developer, which allow output streams to be written in various ways. Commonly used methods of FileOutputStream class Example : Here an example is given which demonstrates how to write an output stream of data to a file using FileOutputStream. For this I have created two text files named "abc.txt" and "xyz.txt". abc.txt is a text file that contains some data which will be read by the program, and xyz.txt is a blank text file into which the text read from abc.txt will be written. I then created a Java class named WriteFileOutputStreamExample.java where I have used FileInputStream to read the input stream from the file 'abc.txt' and used FileOutputStream to write the output stream data into 'xyz.txt'. To write the specified bytes into the xyz.txt file I have used the write(int ...) method of the FileOutputStream class. Finally, all the input and output streams are closed so that the resources associated with the streams can be released. Source Code
/* This example demonstrates how to write the data output stream to the FileOutputStream */
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
public class WriteFileOutputStreamExample {
  public static void main(String args[]) {
    FileInputStream fis = null;
    FileOutputStream fos = null;
    try {
      fis = new FileInputStream("abc.txt");
      fos = new FileOutputStream("xyz.txt");
      System.out.println("Writing the content of abc.txt into xyz.txt...");
      int b;
      // Read the input stream byte by byte and write each byte to the output stream
      while ((b = fis.read()) != -1) {
        fos.write(b);
      }
    } catch (IOException e) {
      e.printStackTrace();
    } finally {
      // Release the resources associated with the streams
      try {
        if (fis != null) fis.close();
        if (fos != null) fos.close();
      } catch (IOException e) {
        e.printStackTrace();
      }
    }
  } // end main
} // end class
Output : When you execute the above example a message will be printed on the console and the content of the abc.txt file will be written to the xyz.txt file. The following steps are required to execute the above example : 1. First compile WriteFileOutputStreamExample.java using the javac command provided in the JDK, as javac WriteFileOutputStreamExample.java 2. After successful compilation execute the class by using the java command provided in the JDK, as java WriteFileOutputStreamExample
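On Java 7 and later the same copy can be written more compactly with try-with-resources, which closes both streams automatically. This is an alternative sketch, not part of the original tutorial.
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
public class WriteFileOutputStreamExample2 {
  public static void main(String[] args) {
    // Streams declared in the try header are closed automatically, even on exceptions
    try (FileInputStream fis = new FileInputStream("abc.txt");
         FileOutputStream fos = new FileOutputStream("xyz.txt")) {
      int b;
      while ((b = fis.read()) != -1) {
        fos.write(b);
      }
    } catch (IOException e) {
      e.printStackTrace();
    }
  }
}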
https://www.roseindia.net/java/example/java/io/bytefileoutputstream.shtml
CC-MAIN-2022-27
en
refinedweb
#include <CGAL/Triangulation.h> This class implements triangulations of point sets in dimension \( d \).. The triangulation deduces its maximal dimension from the type TriangulationTraits_::Dimension. This dimension has to match the dimension returned by TriangulationDataStructure_::maximal_dimension(). The information in the iostream is: the current dimension, the number of finite vertices, the non-combinatorial information about vertices (point, etc.), the number of full cells, the indices of the vertices of each full cell, plus the non-combinatorial information about each full cell, then the indices of the neighbors of each full cell, where the index corresponds to the preceding list of full cells. Triangulation_data_structure<Dimensionality, TriangulationDSVertex_, TriangulationDSFullCell_> Delaunay_triangulation<DelaunayTriangulationTraits_, TriangulationDataStructure_> This indicates whether the maximal dimension is static (i.e. if the type of Maximal_dimension is CGAL::Dimension_tag<int dim>) or dynamic (i.e. if the type of Maximal_dimension is CGAL::Dynamic_dimension_tag). In the latter case, the dim parameter passed to the constructor of the class is used. A point in Euclidean space. Note that in the context of a Regular_triangulation class (which derives from this class), TriangulationTraits_::Point_d is a weighted point. Used by Triangulation to specify which case occurs when locating a point in the triangulation. Instantiates a triangulation with one vertex (the vertex at infinity). See the description of the nested type Maximal_dimension above for an explanation of the use of the parameter dim. The triangulation stores a copy of the geometric traits gt. This is a function for debugging purpose. Returns true if and only if all finite full cells incident to v have positive orientation. The verbose parameter is not used. Contracts the Face f to a single vertex at position p. Returns a handle to that vertex. fmust be a triangulation of a sphere of dimension tr. current_dimension()). Returns true if the triangulation has no finite vertex. Returns false otherwise. Inserts the points found in range [s,e) in the triangulation. Returns the number of vertices actually inserted. (If several vertices share the same position in space, only the vertex that was actually inserted is counted.) Inserts point p in the triangulation. Returns a Vertex_handle to the vertex of the triangulation with position p. Prior to the actual insertion, p is located in the triangulation; hint is used as a starting place for locating p. Inserts point p into the triangulation and returns a handle to the Vertex at that position. The action taken depends on the value of loc_type: ON_VERTEX Vertexdescribed by fis set to p. IN_FACE pis inserted in the Face f. IN_FACET pis inserted in the Facet ft. pis inserted in the triangulation according to the value of loc_type, using the full cell c. This method is used internally by the other insert() methods. Inserts point p in the triangulation. pmust lie in the relative interior of f. Inserts point p in the triangulation. pmust lie in the relative interior of ft. Inserts point p in the triangulation. pmust lie in the interior of c. Removes the full cells in the range \( C=\) [s, e), inserts a vertex at position p and fills the hole by connecting each face of the boundary to p. A Vertex_handle to the new Vertex is returned. The facet ft must lie on the boundary of \( C\) and its defining full cell, tr. full_cell(ft) must lie inside \( C\). 
Handles to the newly created full cells are output in the out output iterator. pin its interior and must not contain any vertex of the triangulation. Inserts point p in the triangulation. pmust lie outside the affine hull of tr. Inserts point p in the triangulation. pmust lie outside the convex hull of tr. The half-space defined by the infinite full cell cmust contain p. This is a function for debugging purpose. Partially checks whether tr is a triangulation. This function returns true if the combinatorial triangulation data structure's is_valid() test returns true and if some geometric tests are passed with success. It is checked that the orientation of each finite full cell is positive and that the orientation of each infinite full cell is consistent with their finite adjacent full cells. The verbose parameter is not used. locate returns a default constructed Full_cell_handle(). If the point query lies in the interior of a bounded (finite) full cell of tr, the latter full cell is returned. If query lies on the boundary of some finite full cells, one of the cells is returned. Let \( d=\) tr. current_dimension(). If the point query lies outside the convex hull of the points, an infinite full cell with vertices \( \{ p_1, p_2, \ldots, p_d, \infty\}\) is returned such that the full cell \( (p_1, p_2, \ldots, p_d, query)\) is positively oriented (the rest of the triangulation lies on the other side of facet \( (p_1, p_2, \ldots, p_d)\)). loc_type is set to OUTSIDE_AFFINE_HULL, and locate returns Full_cell_handle(). If the query point lies inside the affine hull of the points, the function finds the \( k\)-face that contains query in its relative interior (if the \( k\)-face is finite, it is unique) and the result is returned as follows: loc_typeis set to ON_VERTEX, fis set to the vertex vthe querylies on and a full cell having vas a vertex is returned. c.current_dimension()-1 loc_typeis set to IN_FACE, fis set to the unique finite face containing the querypoint. A full cell having fon its boundary is returned. c.current_dimension()-1 loc_typeis set to IN_FACET, ftis set to one of the two representation of the finite facet containing the querypoint. The full cell of ftis returned. c.current_dimension() querypoint lies outside the convex hull of the points in the triangulation, then loc_typeis set to OUTSIDE_CONVEX_HULLand a full cell is returned as in the locatemethod above. If the querypoint lies inside the convex hull of the points in the triangulation, then loc_typeis set to IN_FULL_CELLand the unique full cell containing the querypoint is returned. Same as above but hint, the starting place for the search, is a vertex. The parameter hint is ignored if it is a default constructed Vertex_handle(). Reads the underlying combinatorial triangulation from is by calling the corresponding input operator of the triangulation data structure class (note that the infinite vertex is numbered 0), and the non-combinatorial information by calling the corresponding input operators of the vertex and the full cell classes (such as point coordinates), which are provided by overloading the stream operators of the vertex and full cell types. Assigns the resulting triangulation to t.
https://doc.cgal.org/4.12/Triangulation/classCGAL_1_1Triangulation.html
CC-MAIN-2022-27
en
refinedweb
A HTTP mocking framework written in Swift. 😎 Painless API mocking: Shock lets you quickly and painlessly provide mock responses for web requests made by your apps. 🧪 Isolated mocking: When used with UI tests, Shock runs its server within the UI test process and stores all its responses within the UI tests target - so there is no need to pollute your app target with lots of test data and logic. ⭐️ Shock now supports parallel UI testing!: Shock can run isolated servers in parallel test processes. See below for more details! 🔌 Shock can now host a basic socket: In addition to an HTTP server, Shock can also host a socket server for a variety of testing tasks. See below for more details! Add the following to your podfile: pod 'Shock', '~> x.y.z' You can find the latest version on cocoapods.org Copy the URL for this repo, and add the package in your project settings. Shock aims to provide a simple interface for setting up your mocks. Take the example below: class HappyPathTests: XCTestCase { var mockServer: MockServer! override func setUp() { super.setUp() mockServer = MockServer(port: 6789, bundle: Bundle(for: type(of: self))) mockServer.start() } override func tearDown() { mockServer.stop() super.tearDown() } func testExample() { let route: MockHTTPRoute = .simple( method: .get, urlPath: "/my/api/endpoint", code: 200, filename: "my-test-data.json" ) mockServer.setup(route: route) /* ... Your test code ... */ } } Bear in mind that you will need to replace your API endpoint hostname with 'localhost' and the port you specify in the setup method during test runs. e.g.:{PORT}/my/api/endpoint In the case or UI tests, this is most quickly accomplished by passing a launch argument to your app that indicates which endpoint to use. For example: let args = ProcessInfo.processInfo.arguments let isRunningUITests = args.contains("UITests") let port = args["MockServerPort"] if isRunningUITests { apiConfiguration.setHostname(":\(port)/") } Note: 👉 The easiest way to pass arguments from your test cases to your running app is to use another of our wonderful open-source libraries: AutomationTools Shock provides different types of mock routes for different circumstances. A simple mock is the preferred way of defining a mock route. It responds with the contents of a JSON file in the test bundle, provided as a filename to the mock declaration like so: let route: MockHTTPRoute = .simple( method: .get, urlPath: "/my/api/endpoint", code: 200, filename: "my-test-data.json" ) A custom mock allows further customisation of your route definition including the addition of query string parameters and HTTP headers. This gives you more control over the finer details of the requests you want your mock to handle. Custom routes will try to strictly match your query and header definitions so ensure that you add custom routes for all variations of these values. let route = MockHTTPRoute = .custom( method: .get, urlPath: "/my/api/endpoint", query: ["queryKey": "queryValue"], headers: ["X-Custom-Header": "custom-header-value"], code: 200, filename: "my-test-data.json" ) Sometimes we simply want our mock to redirect to another URL. The redirect mock allows you to return a 301 redirect to another URL or endpoint. let route: MockHTTPRoute = .redirect(urlPath: "/source", destination: "/destination") A templated mock allows you to build a mock response for a request at runtime. It uses Mustache to allow values to be built in to your responses when you setup your mocks. 
For example, you might want a response to contain an array of items that is variable size based on the requirements of the test. /template route in the Shock Route Tester example app for a more comprehensive example. let route = MockHTTPRoute = .template( method: .get, urlPath: "/template", code: 200, filename: "my-templated-data.json", data: [ "list": ["Item #1", "Item #2"], "text": "text" ]) ) A collection route contains an array of other mock routes. It is simply a container for storing and organising routes for different tests. In general, if your test uses more than one route Collection routes are added recursively, so a given collection route can be included in another collection route safely. let firstRoute: MockHTTPRoute = .simple(method: .get, urlPath: "/route1", code: 200, filename: "data1.json") let secondRoute: MockHTTPRoute = .simple(method: .get, urlPath: "/route2", code: 200, filename: "data2.json") let collectionRoute: MockHTTPRoute = .collection(routes: [ firstRoute, secondRoute ]) A timeout route is useful for testing client timeout code paths. It simply waits a configurable amount of seconds (defaulting to 120 seconds). Note if you do specify your own timeout, please make sure it exceeds your client's timeout. let route: MockHTTPRoute = .timeout(method: .get, urlPath: "/timeouttest") let route: MockHTTPRoute = .timeout(method: .get, urlPath: "/timeouttest", timeoutInSeconds: 5) In some case you might prefer to have all the calls to be mocked so that the tests can reliably run without internet connection. You can force this behaviour like so: server.shouldSendNotFoundForMissingRoutes = true This will send a 404 status code with an empty response body for any unrecognised paths. Shock now support middleware! Middleware lets you use custom logic to handle a given request. The simplest way to use middleware is to add an instance of ClosureMiddleware to the server. For example: let myMiddleware = ClosureMiddleware { request, response, next in if request.headers["X-Question"] == "Can I have a cup of tea?" { response.headers["X-Answer"] = "Yes, you can!" } next() } mockServer.add(middleware: myMiddleware) The above will look for a request header named X-Question and, if it is present with the expected value, it will send back an answer in the 'X-Answer' response header. Mock routes and middleware work fine together but there are a few things worth bearing in mind: For middleware such as the example above, the order of middleware won't matter. However, if you are making changes to a part of the response that was already set by the mock routes middleware, you may get unexpected results! Shock can now host a socket server in addition to the HTTP server. This is useful for cases where you need to mock HTTP requests and a socket server. The Socket server uses familiar terminology to the HTTP server, so it has inherited the term "route" to refer to a type of socket data handler. The API is similar to the HTTP API in that you need to create a MockServerRoute, call setupSocket with the route and when server start is called a socket will be setup with your route (assuming at least one route is registered). If no MockServerRoutes are setup, the socket server is not started. The socket server can only be hosted in addition to the HTTP server, as such Shock will need a port range of at least two ports, using the init method that takes a range. let range: ClosedRange<Int> = 10000...10010 let server = MockServer(portRange: range, bundle: ...) 
There is only one route currently available for the socket server and that is logStashEcho. This route will setup a socket that accepts messages being logged to Logstash and echo them back as strings. Here is an example of using logStashEcho with our JustTrack framework. import JustLog import Shock let server = MockServer(portRange: 9090...9099, bundle: ...) let route = MockSocketRoute.logStashEcho { (log) in print("Received \(log)" } server.setupSocket(route: route) server.start() let logger = Logger.shared logger.logstashHost = "localhost" logger.logstashPort = UInt16(server.selectedSocketPort) logger.enableLogstashLogging = true logger.allowUntrustedServer = true logger.setup() logger.info("Hello world!") It's worth noting that Shock is an untrusted server, so the logger.allowUntrustedServer = true is necessary. The Shock Route Tester example app lets you try out the different route types. Edit the MyRoutes.swift file to add your own and test them in the app. Shock is available under Apache License 2.0. See the LICENSE file for more info. Swiftpack is being maintained by Petr Pavlik | @ptrpavlik | @swiftpackco | API | Analytics
https://swiftpack.co/package/justeat/Shock
CC-MAIN-2022-27
en
refinedweb
Testing Automation Scripts with the new Maximo 7.6 “Testscript” method (Part 1: MBO based scripts) Maximo introduced with Version 7.6 a new feature which allows you to test your automation scripts in context of a new or an existing Mbo as well as in context of the Maximo Integration Framework (MIF). The downside of that new function is, that I currently have not found any good documentation and that some features like the object path syntax are not self explaining. In this two part series I would like to introduce the new “Testscript” feature and explain how easy it is to use for your daily script testing. In part 1 we will cover the test of scripts in context of a Mbo and in part two I will show you how to test in context of the MIF. The old styled “Run Script” testing is no longer visible but can be enabled again using the trick in my other post. The first thing to mention if you want to test a script with the new Mbo-Test functionality is, that you need to have a script with an Object Launchpoint. Scripts with attribute launchpoints are not tested, or even worse if you have both: an object launch point and a attribute launch point on the same Mbo always the object script runs, even you select the attribute launch point script! (Might be confusing!). On the the other hand side you could utilize an Object Launchpoint testscript to set a certain attribute in a Mbo which then triggers the attribute launchpoint as well 😉 Now lets create a very simple script with an object launch point for the ASSET object like the following: print "Hello World" print mbo.getString("ASSETNUM") Press the “Test Script” button and you will see the following dialog: At the top you will see information about the script and the selected Launchpoint we are running on. With 1. you will select if we want the script to be tested against a newly created object or and existing object. In 2. an object path can be specified if we want to reference an existing object. The format I currently found out is: <OBJECT>[<SQL-WHERE>] Examples could be: ASSET[ASSETNUM='11200'] ASSET[ASSETNUM like '11%'] ASSET[ISRUNNING = 1] ITEM[ITEMNUM='0815'] Important to remember, that you always get only a single resulting record to your script. This is the default behavior for an object script, where the resulting set is stored in the implicit Launchpoint variable mbo. If you select Existing Object and specify an Object Path (remember to copy the Object Path to the clipboard – you have to reenter it for every test!) you can press the Test button. You might see a result as follows: - Data contains the resulting MBO in XML format. - Log contains the output of the Print statements of the script. With the Set attribute values section you can specify attributes which are overwritten from the original result. This is a nice feature when you need some testing data with certain specification (e.G. We need an asset in status of not running (ISRUNNING = 0), so we just need to specify: So far we just have discussed the Existing Object path. If you like to create a New Object this also can be done with the testing function. The testing function basically calls an mboSet.addAtEnd() function to append a new record to the given MboSet. With the usage of Set attribute values you can predefine fields of the newly created Mbo before it is handed over to the Jython script. A bit strange is, that if you try to create an Asset Object and do not specify an ASSETNUM you will get an error, that the asset field needs to be specified. 
If you will set the ASSETNUM field you will get an error, that it is readonly and cannot be set. The only solution I found so far is to hardly overwrite the readonly check by using the Field Flag “NOACCESSCHECK”: from psdi.mbo import MboConstants mbo.setValue("ASSETNUM", "ASS0815", MboConstants.NOACCESSCHECK ) mbo.setValue("DESCRIPTION", "New Test Asset!") So far for this first tutorial on the new Test script capability. In the next part I will cover the capability to test automation scripts customizing the MIF Interface. We are using Maximo CD 7.6 and there is no “Test Script” button or signature in the AUTOSCRIPT application. Is this an OOTB funtionality ? Basically this is a OOTB functionality introduced in Maximo 7.6. Not sure if it has been introduced by one of the fix packs. In 7.6.0.7 it is definitely included. Recently, I installed Maximo 7.6.0.0 and upgraded it to 7.6.0.8 version with Utility and Spatial Add on. 1.When I created a basic Object level script to set a description for Asset and Workorder,in both cases I couldn’t see the changes on UI and even logs are not getting printed. 2.For Jobplan ,Object level script is working.Tried Woactivity and Jobplan attribute launch points,they worked. Any reason why it is happening.Moreover,in test functionality process log doesn’t show print statements result. Thanks in Advance!! UI changes are always hard to archive with automation scripting since you have no control of ui. For the logging issue have you tried to use the logging command as shown in this articel? Maybe you can help When I run this: import sys print sys.path if sys.path.count(‘__pyclasspath__/Lib’) == 1 : print ‘path to /Lib already exists’ else : print ‘extend path to /Lib’ sys.path.append(‘__pyclasspath__/Lib’) import socket I get this: Traceback (most recent call last): File “”, line 10, in File “__pyclasspath__/Lib/socket.py”, line 11, in File “__pyclasspath__/Lib/string.py”, line 122, in File “__pyclasspath__/Lib/string.py”, line 115, in __init__ AttributeError: type object ‘re’ has no attribute ‘escape’ do you have any input on what might be the cause?
https://www.maximoscripting.com/testing-automation-scripts-with-the-new-maximo-7-6-testscript-method-part-1-mbo-based-scripts/?replytocom=69
CC-MAIN-2022-27
en
refinedweb
current position:Home>Python Basics: from variables to exception handling Python Basics: from variables to exception handling 2022-01-30 15:36:50 【chaoyu】 brief introduction Python It's a universal programming language , It is widely used in scientific computing and machine learning . python Operation mode - shell Interactive interpreter - File mode , Extension py Install third-party plug-in package pip install [pkgName] Copy code Output :print function grammar : print(*objects, sep=' ', end='\n', file=sys.stdout, flush=False) Copy code - Format and output the object as a string to the stream file object file in . All non keyword parameters are pressed str() Method to convert to string output ; - Key parameters sep It's the implementation separator , For example, when multiple parameters are output, you want to output the middle separator character ; - Key parameters end Is the character at the end of the output , The default is line break \n; - Key parameters file Is the file that defines the output of the stream , Can be standard system output sys.stdout, It can also be redefined as another file ; - Key parameters flush Is to output the content to the stream file immediately , No caching . 【 Example 】 Without parameters , Each output will wrap . shoplist = ['apple', 'mango', 'carrot', 'banana'] print("This is printed without 'end'and 'sep'.") for item in shoplist: print(item) # This is printed without 'end'and 'sep'. # apple # mango # carrot # banana shoplist = ['apple', 'mango', 'carrot', 'banana'] print("This is printed with 'end='&''.") for item in shoplist: print(item, end='&') print('hello world') # This is printed with 'end='&''. # apple&mango&carrot&banana&hello world shoplist = ['apple', 'mango', 'carrot', 'banana'] print("This is printed with 'sep='&''.") for item in shoplist: print(item, 'another string', sep='&') # This is printed with 'sep='&''. 
# apple&another string # mango&another string # carrot&another string # banana&another string Copy code Input : input function input() The returned type is string type price = input(' Please enter the price :') price type(price) # <class 'str'> Copy code style - Single-line comments # # This is a comment print("Hello world") Copy code - Multiline comment ''' This is a multiline comment , Use three single quotes This is a multiline comment , Use three single quotes This is a multiline comment , Use three single quotes ''' print("Hello china") Copy code - Line continuation \ if signal == 'red' and\ car == 'moving': car='stop' # Equate to if signal == 'red' and car == 'moving': car='stop' Copy code There are two situations in which a line can be wrapped directly without line continuation - parentheses 、 brackets 、 The inside of curly braces can be written in multiple lines - The string under three quotation marks can be written across lines print('''hi everybody, welcome to python, what is your name?''') Copy code - One line, many sentences use ; x='Today';y='is';z='Thursday';print(x,y,z) Copy code - Indent Python Use the same indentation to represent the same level statement block Increasing the indent indicates the beginning of the statement block Reducing the indent indicates the exit of the statement block Operator 【 Example 】 print(1 + 1) # 2 print(2 - 1) # 1 print(3 * 4) # 12 print(3 / 4) # 0.75 print(3 // 4) # 0 print(3 % 4) # 3 print(2 ** 3) # 8 Copy code - Comparison operator 【 Example 】 print(2 > 1) # True print(2 >= 4) # False print(1 < 2) # True print(5 <= 2) # False print(3 == 4) # False print(3 != 5) # True Copy code - Logical operators 【 Example 】 print((3 > 2) and (3 < 5)) # True print((1 > 3) or (9 < 2)) # False print(not (2 > 1)) # False Copy code - An operator 【 Example 】 About binary operations , See “ An operation ” Part of the explanation . print(bin(4)) # 0b100 print(bin(5)) # 0b101 print(bin(~4), ~4) # -0b101 -5 print(bin(4 & 5), 4 & 5) # 0b100 4 print(bin(4 | 5), 4 | 5) # 0b101 5 print(bin(4 ^ 5), 4 ^ 5) # 0b1 1 print(bin(4 << 2), 4 << 2) # 0b10000 16 print(bin(4 >> 2), 4 >> 2) # 0b1 1 Copy code - Ternary operator With the conditional expression of this ternary operator , You can use one statement to complete the above conditional judgment and assignment operations . 【 Example 】 x, y = 4, 5 small = x if x < y else y print(small) # 4 Copy code - Other operators 【 Example 】 letters = ['A', 'B', 'C'] if 'A' in letters: print('A' + ' exists') if 'h' not in letters: print('h' + ' not exists') # A exists # h not exists Copy code 【 Example 】 Both variables in the comparison point to immutable types . a = "hello" b = "hello" print(a is b, a == b) # True True print(a is not b, a != b) # False False Copy code 【 Example 】 Both variables in the comparison point to variable types . a = ["hello"] b = ["hello"] print(a is b, a == b) # False True print(a is not b, a != b) # True False Copy code Be careful : is, is not Compare the memory addresses of the two variables ==, != You compare the values of two variables Compare the two variables , It points to the immutable type of address (str etc. ), that is,is not and ==,!= It's completely equivalent . Compare the two variables , It points to a variable address type (list,dict,tuple etc. ), There is a difference between the two . 
Operator precedence 【 Example 】 print(-3 ** 2) # -9 print(3 ** -2) # 0.1111111111111111 print(1 << 3 + 2 & 7) # 0 print(-3 * 2 + 5 / -2 - 4) # -12.5 print(3 < 4 and 4 < 5) # True Copy code Variables and assignments - Before using variables , It needs to be assigned first . - Variable names can include letters 、 Numbers 、 Underline 、 But the variable name cannot start with a number . - Python Variable names are case sensitive ,foo != Foo. 【 Example 】 teacher = " Old horse's procedural life " print(teacher) # Old horse's procedural life first = 2 second = 3 third = first + second print(third) # 5 myTeacher = " Old horse's procedural life " yourTeacher = " Pony's procedural life " ourTeacher = myTeacher + ',' + yourTeacher print(ourTeacher) # Old horse's procedural life , Pony's procedural life Copy code Data types and transformations - Basic types : integer 、 floating-point 、 Boolean type - Container type : character string 、 Tuples 、 list 、 Dictionaries and collections - integer python 2 Support adding after plastic surgery “L” Long integer 【 Example 】 adopt print() It can be seen that a Value , And classes (class) yes int. a = 1031 print(a, type(a)) # 1031 <class 'int'> Copy code Python All things in it are objects (object), Integer is no exception , As long as it's an object , There are corresponding properties (attributes) And methods (methods). 【 Example 】 Find a binary representation of an integer , Then return its length . a = 1031 print(bin(a)) # 0b10000000111 print(a.bit_length()) # 11 Copy code - floating-point 【 Example 】 print(1, type(1)) # 1 <class 'int'> print(1., type(1.)) # 1.0 <class 'float'> a = 0.00000023 b = 2.3e-7 print(a) # 2.3e-07 print(b) # 2.3e-07 Copy code Sometimes we want to keep floating point numbers after the decimal point n position . It can be used decimalIn the bag DecimalObjects and getcontext()Method to implement . - Boolean type Boolean (boolean) Type variable can only take two values ,True and False. When Boolean variables are used in numerical operations , use 1 and 0 representative True and False. 【 Example 】 print(True + True) # 2 print(True + False) # 1 print(True * False) # 0 Copy code In addition to assigning values directly to variables True and False, You can also use bool(X) To create variables 【 Example 】bool Acting on basic type variables :X As long as it's not an integer 0、 floating-point 0.0,bool(X) Namely True, The rest is False. print(type(0), bool(0), bool(1)) # <class 'int'> False True print(type(10.31), bool(0.00), bool(10.31)) # <class 'float'> False True print(type(True), bool(False), bool(True)) # <class 'bool'> False True Copy code determine bool(X) The value of is True still False, Just look X Is it empty , Empty words are False, If it's not empty, it's True. - For numerical variables ,0, 0.0 Can be considered empty . - For container variables , If there is no element in it, it is empty . Get type information Get type information type(object) 【 Example 】 print(isinstance(1, int)) # True print(isinstance(5.2, float)) # True print(isinstance(True, bool)) # True print(isinstance('5.2', str)) # True Copy code notes : - type() A subclass is not considered a superclass type , Do not consider inheritance relationships . - isinstance() Think of a subclass as a superclass type , Consider inheritance relationships . It is recommended if you want to determine whether two types are the same isinstance(). 
Type conversion Convert to integer int(x, base=10) Convert to string str(object='') Convert to floating point float(x) 【 Example 】 print(int('520')) # 520 print(int(520.52)) # 520 print(float('520.52')) # 520.52 print(float(520)) # 520.0 print(str(10 + 10)) # 20 print(str(10.1 + 5.2)) # 15.3 Copy code An operation - Original code 、 Inverse and complement Binary has three different representations : Original code 、 Inverse and complement , Complement code is used inside the computer to express . Original code : Is its binary representation ( Be careful , There is a sign bit ). 00 00 00 11 -> 3 10 00 00 11 -> -3 Copy code Inverse code : The inverse of a positive number is the original code , The inverse of a negative number is a sign bit invariant , The rest of the bits are reversed ( The corresponding positive number is negated by bit ). 00 00 00 11 -> 3 11 11 11 00 -> -3 Copy code Complement code : The complement of a positive number is the original code , The complement of a negative number is the inverse +1. 00 00 00 11 -> 3 11 11 11 01 -> -3 Copy code Sign bit : The highest bit is the sign bit ,0 It means a positive number ,1 A negative number . In the in place operation, the symbol bit also participates in the operation . Bitwise operation - Bitwise non operation ~ ~ 1 = 0 ~ 0 = 1 Copy code ~ hold num In the complement of 0 and 1 All reversed (0 Turn into 1,1 Turn into 0) The sign bit of a signed integer is ~ In the operation, it will also take negation . - Press bit and operate & 1 & 1 = 1 1 & 0 = 0 0 & 1 = 0 0 & 0 = 0 Copy code Only two corresponding bits are 1 Only when 1 - To press or operate | 1 | 1 = 1 1 | 0 = 1 0 | 1 = 1 0 | 0 = 0 Copy code As long as one of the two corresponding bits 1 When it comes to time 1 - Operate by bitwise exclusive or ^ 1 ^ 1 = 0 1 ^ 0 = 1 0 ^ 1 = 1 0 ^ 0 = 0 Copy code It is only when two corresponding bits are different 1 Nature of XOR operation : Satisfy the law of exchange and the law of union - Shift left by bit operation << num << i take num The binary representation of moves to the left i The value obtained by bit . 00 00 10 11 -> 11 11 << 3 --- 01 01 10 00 -> 88 Copy code - Shift right by bit operation >> num >> i take num The binary representation of moves to the right i The value obtained by bit . 00 00 10 11 -> 11 11 >> 2 --- 00 00 00 10 -> 2 Copy code Using bit operation to realize fast calculation adopt <<,>> Fast calculation 2 The multiple problem of . n << 1 -> Calculation n*2 n >> 1 -> Calculation n/2, Negative odd operations are not available n << m -> Calculation n*(2^m), That's multiplied by 2 Of m Power n >> m -> Calculation n/(2^m), Divided by 2 Of m Power 1 << n -> 2^n Copy code adopt ^ Quickly swap two integers . adopt ^ Quickly swap two integers . a ^= b b ^= a a ^= b Copy code adopt a & (-a) Get... Quickly a The last is 1 Integer of position . 00 00 01 01 -> 5 & 11 11 10 11 -> -5 --- 00 00 00 01 -> 1 00 00 11 10 -> 14 & 11 11 00 10 -> -14 --- 00 00 00 10 -> 2 Copy code Using bit operation to realize integer set The binary representation of a number can be regarded as a set (0 Not in the set ,1 In a set ). For example, a collection {1, 3, 4, 8}, It can be expressed as 01 00 01 10 10 The corresponding bit operation can also be regarded as the operation on the set . 
Operation of elements and collections : a | (1<<i) -> hold i Insert into collection a & ~(1<<i) -> hold i Remove from collection a & (1<<i) -> Judge i Whether it belongs to the collection ( Zero does not belong to , Nonzero belongs to ) Copy code Operations between collections : a repair -> ~a a hand over b -> a & b a and b -> a | b a Bad b -> a & (~b) Copy code Be careful : Integers exist in memory in the form of complements , Naturally, the output is also output according to the complement . print(bin(3)) # 0b11 print(bin(-3)) # -0b11 print(bin(-3 & 0xffffffff)) # 0b11111111111111111111111111111101 print(bin(0xfffffffd)) # 0b11111111111111111111111111111101 print(0xfffffffd) # 4294967293 Copy code Isn't it subversive , We can see from the result that : - Python in bin A negative number ( Decimal means ), The output is the binary representation of its original code plus a minus sign , crater . - Python Integers in are stored as complements . - Python The integer is unlimited in length and will not overflow . So in order to get negative numbers ( Decimal means ) Complement , You need to manually combine it with a hexadecimal number 0xffffffff Do bit and operation , Give it back bin() For the output , What you get is the complement of a negative number . Conditional statements if sentence grammar : if expression: expr_true_suit Copy code - expression: Conditional expression - expr_true_suit: expression Condition is true Execute code block , It has to be indented if - else sentence if expression: expr_true_suit else expr_false_suit Copy code - expr_false_suit: expression Condition is false Execute code block , It has to be indented , elseDon't indent if - elif - else sentence grammar : if expression : expr_true_suite elif expression2: expr2_true_suite : : elif expressionN : exprN_true_suite else: none_of_the_above_suite Copy code example temp = input(' Please enter the grade :') source = int(temp) if 100 >= source >= 90: print('A') elif 90 > source >= 80: print('B') elif 80 > source >= 60: print('C') elif 60 > source >= 0: print('D') else: print(' Input error !') Copy code assert key word assert We call this key word “ Assertion ”, When the following condition of the keyword is False when , The program automatically crashes and throws AssertionError It's abnormal . 
Example:

my_list = ['lsgogroup']
my_list.pop(0)
assert len(my_list) > 0  # AssertionError

Loop statements

The while loop

Syntax:

while expression:
    suite_to_repeat

Example:

string = 'abcd'
while string:
    print(string)
    string = string[1:]

The while - else loop

Syntax:

while boolean_expression:
    code_block
else:
    code_block

The else block runs when the loop finishes normally, that is, without hitting break.

The for loop

Syntax:

for loop_variable in iterable:
    code_block

Example:

dic = {'a': 1, 'b': 2, 'c': 3, 'd': 4}
for key, value in dic.items():
    print(key, value, sep=':', end=' ')
# a:1 b:2 c:3 d:4

The for - else loop

Syntax:

for loop_variable in iterable:
    code_block
else:
    code_block

Example:

for num in range(10, 20):  # iterate over the numbers from 10 to 19
    for i in range(2, num):  # iterate over possible factors
        if num % i == 0:  # found the first factor
            j = num / i  # compute the second factor
            print('%d equals %d * %d' % (num, i, j))
            break  # leave the inner loop
    else:  # the else part of the inner loop
        print(num, 'is a prime number')
# 10 equals 2 * 5
# 11 is a prime number
# 12 equals 2 * 6
# 13 is a prime number
# 14 equals 2 * 7
# 15 equals 3 * 5
# 16 equals 2 * 8
# 17 is a prime number
# 18 equals 2 * 9
# 19 is a prime number

The range() function

Syntax:

range([start,] stop[, step=1])

- This BIF (built-in function) takes three parameters; the two enclosed in square brackets are optional.
- step=1 means the third parameter defaults to 1.
- range() generates a sequence of numbers starting at start and ending before stop; the sequence includes start but excludes stop.

Example:

for i in range(2, 9):  # 9 is not included
    print(i)
# 2
# 3
# 4
# 5
# 6
# 7
# 8

The enumerate() function

Syntax:

enumerate(sequence, [start=0])

- sequence: a sequence, iterator or other object that supports iteration.
- start: the starting value of the index.
- Returns an enumerate object.

Example:

seasons = ['Spring', 'Summer', 'Fall', 'Winter']
lst = list(enumerate(seasons))
print(lst)
# [(0, 'Spring'), (1, 'Summer'), (2, 'Fall'), (3, 'Winter')]
lst = list(enumerate(seasons, start=1))  # the index starts at 1
print(lst)
# [(1, 'Spring'), (2, 'Summer'), (3, 'Fall'), (4, 'Winter')]

- Using enumerate() with a for loop:

for i, a in enumerate(A):
    do something with a

enumerate(A) not only yields the elements of A, it also attaches an index to each element (starting from 0 by default). With enumerate(A, j) the index starts at j instead.

Example:

languages = ['Python', 'R', 'Matlab', 'C++']
for language in languages:
    print('I love', language)
print('Done!')
# I love Python
# I love R
# I love Matlab
# I love C++
# Done!

for i, language in enumerate(languages, 2):
    print(i, 'I love', language)
print('Done!')
# 2 I love Python
# 3 I love R
# 4 I love Matlab
# 5 I love C++
# Done!
The break statement

break jumps out of the innermost enclosing loop.

The continue statement

continue ends the current iteration and starts the next one.

The pass statement

pass means "do nothing". If you leave a spot empty where a statement is required, the interpreter reports an error; pass statements exist to fill such spots.

def a_func():
    pass

pass is an empty statement: it does nothing and only acts as a placeholder to keep the program structure complete. Even though pass does nothing, it lets the code run when you are not yet sure what to put in a particular place.

Comprehensions

- List comprehension

[ expr for value in collection [if condition] ]

Example:

x = [-4, -2, 0, 2, 4]
y = [a * 2 for a in x]
print(y)
# [-8, -4, 0, 4, 8]

- "Tuple comprehension" (actually a generator expression)

( expr for value in collection [if condition] )

Example:

a = (x for x in range(10))
print(a)
# <generator object <genexpr> at 0x0000025BE511CC48>
print(tuple(a))
# (0, 1, 2, 3, 4, 5, 6, 7, 8, 9)

Note that the parentheses do not build a tuple directly; they create a generator, which is why tuple() is needed to materialize it.

- Dictionary comprehension

{ key_expr: value_expr for value in collection [if condition] }

Example:

b = {i: i % 2 == 0 for i in range(10) if i % 3 == 0}
print(b)
# {0: True, 3: False, 6: True, 9: False}

- Set comprehension

{ expr for value in collection [if condition] }

Example:

c = {i for i in [1, 2, 3, 4, 5, 5, 6, 4, 3, 2, 1]}
print(c)
# {1, 2, 3, 4, 5, 6}

- Related: next()

next(iterator[, default])

Example:

e = (i for i in range(10))
print(e)
# <generator object <genexpr> at 0x0000007A0B8D01B0>
print(next(e))  # 0
print(next(e))  # 1
for each in e:
    print(each, end=' ')
# 2 3 4 5 6 7 8 9

Exception handling

An exception is an error detected at run time. The language defines exception types for the errors that can occur; when an error happens, the corresponding exception is raised and an exception handler can take over, so that the program can recover and keep running.

Summary of Python's standard exceptions

- BaseException: base class of all exceptions
- Exception: base class of ordinary exceptions
- StandardError: base class of all built-in standard exceptions (Python 2 only)
- ArithmeticError: base class of all numeric-computation errors
- FloatingPointError: error in a floating-point computation
- OverflowError: a numeric operation exceeded the maximum limit
- ZeroDivisionError: division (or modulo) by zero
- AssertionError: an assert statement failed
- AttributeError: attempted to access an attribute the object does not have
- EOFError: input hit the EOF marker without reading any data
- EnvironmentError: base class of operating-system errors
- IOError: an input/output operation failed
- OSError: an error raised by the operating system (for example, opening a file that does not exist)
- WindowsError: a Windows system call failed
- ImportError: importing a module failed
- KeyboardInterrupt: the user interrupted execution
- LookupError: base class of invalid lookup errors
- IndexError: a sequence index is out of range
- KeyError: a key was looked up in a dictionary that does not contain it
- MemoryError: out of memory (memory can be freed by deleting objects)
- NameError: attempted to access a variable that does not exist
- UnboundLocalError: a local variable was accessed before being initialized
- ReferenceError: a weak reference tried to access an object that has already been garbage-collected
- RuntimeError: a generic run-time error
- NotImplementedError: a method that has not been implemented yet
- SyntaxError: a syntax error
- IndentationError: an indentation error
- TabError: tabs and spaces are mixed
- SystemError: a generic interpreter system error
- TypeError: an operation is invalid for the given types
- ValueError: an argument has the right type but an invalid value
- UnicodeError: a Unicode-related error
- UnicodeDecodeError: a Unicode decoding error
- UnicodeEncodeError: a Unicode encoding error
- UnicodeTranslateError: a Unicode translation error

Summary of Python's standard warnings

- Warning: base class of warnings
- DeprecationWarning: warning about deprecated features
- FutureWarning: warning about constructs whose semantics will change in the future
- UserWarning: warning generated by user code
- PendingDeprecationWarning: warning about features that will be deprecated
- RuntimeWarning: warning about suspicious runtime behavior
- SyntaxWarning: warning about dubious syntax
- ImportWarning: warning triggered during module import
- UnicodeWarning: Unicode-related warning
- BytesWarning: warning related to bytes or bytecode
- ResourceWarning: warning related to resource usage

The try - except statement

Syntax:

try:
    monitored code
except Exception [as reason]:
    exception handling code

The try statement works as follows:

- First, the try clause (the code between the try and except keywords) is executed.
- If no exception occurs, the except clause is skipped and execution of the try statement finishes.
- If an exception occurs while the try clause is running, the rest of the clause is skipped. If the exception's type matches the name after except, the corresponding except clause runs, and execution then continues after the try - except statement.
- If the exception does not match any except clause, it is passed on to an enclosing try statement.

try:
    f = open('test.txt')
    print(f.read())
    f.close()
except OSError as error:
    print('Failed to open the file\nReason: ' + str(error))

# Failed to open the file
# Reason: [Errno 2] No such file or directory: 'test.txt'

A try statement may contain more than one except clause, to handle different specific exceptions.
At most one branch is executed:

try:
    int("abc")
    s = 1 + '1'
    f = open('test.txt')
    print(f.read())
    f.close()
except OSError as error:
    print('Failed to open the file\nReason: ' + str(error))
except TypeError as error:
    print('Type error\nReason: ' + str(error))
except ValueError as error:
    print('Value error\nReason: ' + str(error))

# Value error
# Reason: invalid literal for int() with base 10: 'abc'

The next example tries to look up a key that is not in the dictionary, which raises an exception. Strictly speaking this exception is a KeyError, but because KeyError is a subclass of LookupError, listing LookupError before KeyError would make the LookupError block run instead. So when using multiple except blocks, follow the standard ordering: from the most specific exception to the most general one.

dict1 = {'a': 1, 'b': 2, 'v': 22}
try:
    x = dict1['y']
except KeyError:
    print('Key error')
except LookupError:
    print('Lookup error')
else:
    print(x)

# Key error

A single except clause can handle several exceptions at once; they are placed in parentheses as a tuple:

try:
    s = 1 + '1'
    int("abc")
    f = open('test.txt')
    print(f.read())
    f.close()
except (OSError, TypeError, ValueError) as error:
    print('Something went wrong!\nReason: ' + str(error))

# Something went wrong!
# Reason: unsupported operand type(s) for +: 'int' and 'str'

The try - except - finally statement

try:
    monitored code
except Exception [as reason]:
    exception handling code
finally:
    code that runs no matter what

The finally clause executes whether or not an exception occurred in the try clause.

Example: if an exception is raised in the try clause and no except clause stops it, the exception is re-raised after the finally clause has run.

def divide(x, y):
    try:
        result = x / y
        print("result is", result)
    except ZeroDivisionError:
        print("division by zero!")
    finally:
        print("executing finally clause")

divide(2, 1)
# result is 2.0
# executing finally clause
divide(2, 0)
# division by zero!
# executing finally clause
divide("2", "1")
# executing finally clause
# TypeError: unsupported operand type(s) for /: 'str' and 'str'

The try - except - else statement

If no exception occurs while the try clause is executing, Python runs the statements in the else clause.

try:
    monitored code
except:
    exception handling code
else:
    code that runs if no exception occurred

Using except without naming any exception type is not good practice: the program cannot identify specific exception information, because it catches every exception.
The safer form names the exceptions explicitly:

try:
    monitored code
except (Exception1[, Exception2[, ... ExceptionN]]):
    code executed when one of the above exceptions occurs
else:
    code executed if no exception occurred

try:
    fh = open("testfile.txt", "w")
    fh.write("This is a test file, used for testing exceptions!!")
except IOError:
    print("Error: file not found or could not be read")
else:
    print("Content written to file successfully")
    fh.close()

# Content written to file successfully

Note: an else clause may only appear when there is at least one except clause; using else in a try statement without except is a syntax error.

The raise statement

The raise statement throws a specified exception:

try:
    raise NameError('HiThere')
except NameError:
    print('An exception flew by!')

# An exception flew by!

author[chaoy
https://en.pythonmana.com/2022/01/202201301536471519.html
CC-MAIN-2022-27
en
refinedweb
Array Algorithm: Rotate an image

An image is represented as a 2D matrix. Take a moment to read the problem: you are given an n x n 2D matrix representing an image, and you must rotate the image by 90 degrees clockwise, in place.

Constraints:

matrix.length == n
matrix[i].length == n
1 <= n <= 20
-1000 <= matrix[i][j] <= 1000

Solution

Try rotating a small matrix on paper to find the pattern. It looks like we just pull out four elements (top, right, bottom and left) and swap their values. But how do we do that for every position? We can think of an iterator i running from 0 up to the row size; each iteration simply increments i, and since the row size equals the column size, we can combine i with the row length to reach every position in the matrix.

That is still not enough: a matrix may have several layers depending on its size. Picture a 3x3 Rubik's face, where you rotate the outside layer but not the center cell; a 4x4 matrix has no center cell, and two layers rotate. So the number of layers follows the rule layers = matrix length / 2.

Now we combine layer, the size (length) and i (the row iterator) to reach all positions. Let's solve a sub-problem first:

var top = matrix[layer][i];
var right = matrix[i][length - layer];
var bot = matrix[length - layer][length - i];
var left = matrix[length - i][layer];

Look at the positions:

- Top: the row is fixed, only the column moves with i.
- Right: the column is fixed, only the row moves with i.
- Bottom: the row is fixed, opposite of the top.
- Left: the column is fixed, opposite of the right.

After pulling out the four positions, we assign each one its new value:

// top = left
matrix[layer][i] = left;
// right = top
matrix[i][length - layer] = top;
// bot = right
matrix[length - layer][length - i] = right;
// left = bot
matrix[length - i][layer] = bot;

The complete code would be:

public class Solution {
    public void Rotate(int[][] matrix) {
        var layers = matrix.Length / 2;
        var length = matrix.Length - 1;
        for (var layer = 0; layer < layers; layer++) {
            for (var i = layer; i < length - layer; i++) {
                var top = matrix[layer][i];
                var right = matrix[i][length - layer];
                var bot = matrix[length - layer][length - i];
                var left = matrix[length - i][layer];

                // top = left
                matrix[layer][i] = left;
                // right = top
                matrix[i][length - layer] = top;
                // bot = right
                matrix[length - layer][length - i] = right;
                // left = bot
                matrix[length - i][layer] = bot;
            }
        }
    }
}

The time complexity is O(n * n), since we touch every value of the matrix. The space complexity is O(1), since we use no extra space.
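To sanity-check the solution, a small test harness can be used. The harness below is added for illustration and is not part of the original post; it only assumes the Solution class defined above is in scope:

using System;

public static class Demo {
    public static void Main() {
        // 3x3 sample from the walkthrough above
        var matrix = new int[][] {
            new[] { 1, 2, 3 },
            new[] { 4, 5, 6 },
            new[] { 7, 8, 9 }
        };

        new Solution().Rotate(matrix);

        foreach (var row in matrix)
            Console.WriteLine(string.Join(" ", row));

        // Expected output:
        // 7 4 1
        // 8 5 2
        // 9 6 3
    }
}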
https://nguyennb.medium.com/array-algorithm-rotate-a-2d-matrix-4d1eb5ed1d77?source=read_next_recirc---------1---------------------6aaff6e4_41e6_4af9_b715_90d95c81cf3f-------
CC-MAIN-2022-27
en
refinedweb
Importing

Import the custom properties before using them in your style sheets:

import '@vaadin/vaadin-lumo-styles/style.js';

Clickable Cursor

The way clickable items are indicated to users of pointer devices (typically a mouse) can be configured to suit your application's target audience. You can either follow the "web" approach and use the pointer (hand) cursor for clickable items, or take the "desktop" approach and use the default (arrow) cursor.
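For example, to opt in to the "web" style you can override the Lumo custom property that controls this. The property name below (--lumo-clickable-cursor) is assumed from the Lumo style package and should be checked against the version you are using:

html {
  --lumo-clickable-cursor: pointer;
}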
https://vaadin.com/docs/latest/components/ds-resources/foundation/interaction
CC-MAIN-2022-27
en
refinedweb
Description:
------------
Currently the internal PHP function library is, well, unique. Proposed is a means to maintain backward compatibility while rectifying the problem and, hopefully, to speed things up on the engine side by reducing what the Zend Engine has to track.

The import keyword: import attaches the specified namespace to the current one. An error is thrown if there is a symbol collision between the namespaces.

namespace Bar {
    function Foo(){}
}

namespace Test {
    import \Bar;
    Foo();      // works
    Bar\Foo();  // doesn't work - try use Bar instead.
}

If import is asked to pull in a namespace that doesn't exist, there's an error. import has one other use:

namespace Test;
import @PDO;

This pulls the PDO library of PHP 6 into the Test namespace.

This brings us back to the solution of the function library problem. PHP 6 would ship with a legacy mode set to true. In that mode everything is as now - the namespace \ holds thousands of functions. Turn it off, though, and namespace \ is empty except for a core selection of functions and objects. The rest of the function library would be divided into libraries that can be imported as needed. Legacy functions remain available in the Legacy library, so:

namespace MyNamespace;
import @Legacy;

would let code in MyNamespace use the current function library.

This is certainly a feature request which requires discussion on the internals@ mailing list, and likely the RFC process. For the time being, I'm suspending this ticket.
https://bugs.php.net/bug.php?id=53160
CC-MAIN-2022-27
en
refinedweb
Details Task - Status: Resolved Major - Resolution: Fixed - None - - None - Description The rationale behind this change is that many of the image specifications (e.g., Docker/Appc) are not just for filesystems. They also specify runtime configurations (e.g., environment variables, volumes, etc) for the container. Provisioner should return those runtime configurations to the Mesos containerizer and Mesos containerizer will delegate the isolation of those runtime configurations to the relevant isolator. Here is what it will be look like eventually. We could do those changes in phases: 1) Provisioner will return a ProvisionInfo which includes a 'rootfs' and image specific runtime configurations (could be the Docker/Appc manifest). 2) Then, the Mesos containerizer will generate a ContainerConfig (a protobuf which includes rootfs, sandbox, docker/appc manifest, similar to OCI's host independent config.json) and pass that to each isolator in 'prepare'. Imaging in the future, a DockerRuntimeIsolator takes the docker manifest from ContainerConfig and prepare the container. 3) The isolator's prepare function will return a ContainerLaunchInfo (contains environment variables, namespaces, etc.) which will be used by Mesos containerize to launch containers. Imaging that information will be passed to the launcher in the future. We can do the renaming (ContainerPrepareInfo -> ContainerLaunchInfo) later. Attachments Issue Links - blocks MESOS-4282 Update isolator prepare function to use ContainerLaunchInfo - Resolved - is depended upon by MESOS-4225 Exposed docker/appc image manifest to mesos containerizer. - Resolved
https://issues.apache.org/jira/browse/MESOS-4240
CC-MAIN-2022-27
en
refinedweb
generator-sagegenerator-sage A very simple Yeoman generator for WordPress starter theme sage. Getting StartedGetting Started npm install -g yo Install generator-sage npm install -g generator-sage Create a folder in your WordPress themes folder and initiate the generator mkdir theme-name && cd $_ yo sage Answer some questions in the prompt and you're done! To do:To do: - Handle bower.json, composer.json and package.json search & replace - Handle lang/sage.pot search & replace - Check that roots-wrapper-override doesn't use a namespace/variable/hook that we're changing here - Ask for soil modules and update lib/setup.php - If GA soil module is active, ask for Google Analytics and update lib/setup.php - Choose the frontend framework (bootstrap should not be the only choice) - Allow options to be passed from command line (and prompt only for the missing ones) - Setup a web page/endpoint that returns an archive of generated theme, using one of the following methods: - html5 form UI - direct http POST requests (so we could get generated themes with a simple curL request in other scripts)
https://www.npmjs.com/package/generator-sage
CC-MAIN-2022-27
en
refinedweb
Here we will write a simple program to print the Fibonacci series. In the Fibonacci series, each number is the sum of the previous two numbers.

C# Program to Print the Fibonacci Series

In the Fibonacci series, each number is the sum of the previous two numbers: 0, 1, 1, 2, 3, 5, 8, 13, 21, etc.

using System;

class Program
{
    static void Main(string[] args)
    {
        int iFirst = 0, iSecond = 1, iThird, iCount, iInput;
        Console.Write("Enter the number of elements: ");
        iInput = int.Parse(Console.ReadLine());
        Console.Write(iFirst + " " + iSecond + " "); // printing 0 and 1
        for (iCount = 2; iCount < iInput; ++iCount)  // loop starts from 2 because 0 and 1 are already printed
        {
            iThird = iFirst + iSecond;
            Console.Write(iThird + " ");
            iFirst = iSecond;
            iSecond = iThird;
        }
        Console.ReadKey();
    }
}

Output:

Enter the number of elements: 6
0 1 1 2 3 5

View More:

- C# Program to check whether a number is prime or not using Recursion.
- C# Program to find the sum of all items of an Integer Array.
- C# Program to find the Largest Element in a Matrix.
- C# Program to Swap two numbers without using a third variable.

Conclusion: I hope you love this post. Please don't hesitate to comment for any technical help. Your feedback and suggestions are welcome.
https://debugonweb.com/2018/11/02/c-program-print-fibonacci-series/
CC-MAIN-2019-09
en
refinedweb
For more videos from GraphConnect SF and to register for GraphConnect Europe, check out graphconnect.com.

Patrick Chanezon: Docker is a tool that allows you to develop applications more quickly and productively. Its mission is "to build tools of mass innovation," and it allows for the development of creative internet programming beyond silos.

Cloud Services as Movies

There are three big players in the public cloud market that have been adopted by developers: Amazon, Google and Microsoft. Also available are private cloud services such as VMware, which is entering the public market with vCloud Air, and Microsoft, which has a strong hybrid strategy with Azure and Azure Pack that is installed behind a firewall.

To understand each of these cloud tools, movie comparisons serve a helpful purpose. VMware can be compared to the movie "300"; it is courageous and relies on great technology, but we all know how the movie ends. Amazon's public cloud service can be compared to "Pacific Rim," in which extraterrestrial monsters enter earth through a fold at the bottom of the ocean. In this case, they are "invading" the enterprise market. Google is like the movie "Back to the Future," because trying to get people to adopt their service — which didn't use a firewall — mirrored Marty's experience playing rock and roll to a non-responsive crowd in the 1950s. They are too far ahead of their time. Microsoft's cloud service mirrors the movie "Field of Dreams" — "build it [a public cloud service] and they will come."

A Changing Cloud Landscape

When Docker arrived two years ago, it changed the whole cloud landscape by providing a portable approach to cloud technology, along with a way to do DevOps without lock-in. With the introduction of this new technology, the whole industry reorganized itself around Docker.

At the bottom of this new stack, you have the equivalent of hardware, which are cloud providers such as Amazon, Google and Microsoft. On top of that there are operating systems, all of which shrank, starting with CoreOS, which built a small distribution of Linux that included only Docker and a few small services for managing clusters. Red Hat quickly followed suit with Project Atomic; Ubuntu with Ubuntu Core; and VMware with Photon. Rancher even has system services running in a privileged Docker service, allowing them to run System Docker and Userland Docker.

In the next layer, there's Docker, along with a whole ecosystem of plugins like Weaveworks for networking or ClusterHQ for volumes. Next, there are three main tools for orchestration: Docker Swarm, Apache Mesos and Google's Kubernetes. GS is an interesting entrant; it's like a Heroku that you can install behind the firewall, and it's open source. Cloud Foundry is reinventing itself with Project Lattice as a Docker orchestration engine; IBM has the Bluemix platform, which includes Cloud Foundry and some Docker services; and Tutum is a software-as-a-service platform for orchestrating containers (Tutum was recently acquired by Docker).

Delving Into Docker

Docker is based on isolation using Linux kernel features, namespaces and cgroups. It also includes an image layer system that allows users to cache layers for created images.

Build

When a user develops an application in Docker, they create a Dockerfile – a simple declarative format that can inherit from an existing image. In this example, a Java application built in Docker can inherit from a Java 7 base image, and the copied-in code is compiled and run with javac and java.
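The Dockerfile from the demo isn't reproduced in this writeup; a minimal sketch along the same lines (the java:7 tag and the file names are assumptions for illustration, not taken from the talk) might look like this:

# Build and run a Java app on top of a Java 7 base image (illustrative sketch)
FROM java:7
COPY . /usr/src/myapp
WORKDIR /usr/src/myapp
RUN javac Main.java
CMD ["java", "Main"]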
You can do a docker build with an image and then run it in the daemon.

Applications based on microservices often have a number of different services, such as a Java front end combined with a Neo4j database, for which you can use Docker Compose. This YAML declarative format allows you to specify the containers to run; you go into the directory and run "docker-compose up" to spin up the containers and get an activity log. Docker Machine allows you to provision VMs in any cloud, on any virtualization platform, as long as the Docker daemon is installed. We also use Kitematic, a user interface for both Mac and Windows that allows you to create and manage containers.

Ship

On the ship side, there is Docker Hub, which houses images and is where ops and devs work together, and the Docker Trusted Registry, which is integrated with LDAP and enterprise features and can be installed behind the firewall to manage projects.

Run

On the run side, there are a number of tools for orchestration, including Docker Swarm. This tool puts a daemon in front of all the Docker engines within a cluster; a client talks to the same API through Swarm, which places a workload wherever there is space, based on constraints passed through environment variables. Docker recently performed some tests with Swarm that scaled to 1,000 nodes. The EC2 allowance maxed out, and now we are testing with 10,000 nodes to see how far it can go.

Tutum lets you bring your own nodes from behind the firewall and allows you to do your own build, ship and run there. Docker recently announced a new tool, Project Orca, which is currently in private beta. It's a solution that runs behind a firewall and that we are going to sell to enterprises for running and operating their containers.

In terms of standards, last summer we announced the Open Container Initiative (OCI). There are 35 companies that joined this effort to standardize the runtime and bundle format for containers. The reference implementation of the spec is called runC, which can be used in place of Docker, or when you need complete control over the creation of namespaces. In the Docker 1.9 RC, we now have Docker networks and volumes, which are powerful features for orchestrating containers. This is particularly helpful when running Neo4j Enterprise in a cluster.

How to Dockerize Neo4j

To expose a port, do a docker run with the -d flag to launch it in daemon mode. You can also map a directory on your local machine to the data directory in the container — where the data resides — and then launch Neo4j and connect.

For an example of how to dockerize Neo4j, please watch the video clip below:

For an example of how to use Docker Compose, please watch the video clip below:

Azure Resource Manager

David Makogon: We're focusing on building all of our infrastructure — which is typically compute, storage and networking — in the cloud. The traditional approach is to spin up some VMs and a storage account and build out the network from there. Potentially this has a virtual network, with all or only some ports open to the outside world, which then connects to the database and app resources inside the virtual network. This represents a lot of work, scripting and time.

To speed up the process, we introduced the Azure Resource Manager. This allows you to create a single template that describes your infrastructure: storage account, network interface and public IP.
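The template itself isn't reproduced in this writeup, but as a rough sketch, an Azure Resource Manager template is a JSON document shaped like the following (the resource name, API version and single storage account resource are placeholder assumptions, not taken from the talk):

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "adminUsername": { "type": "string" },
    "adminPassword": { "type": "securestring" }
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "name": "neo4jdemostorage",
      "apiVersion": "2015-06-15",
      "location": "[resourceGroup().location]",
      "properties": { "accountType": "Standard_LRS" }
    }
  ]
}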
The Azure Resource Manager also allows you to define virtual machines, chain them together and build dependencies. Whether you have one virtual machine or 1,000, Azure will spin them up as one atomic operation. On our virtual machines in Microsoft Azure, we have virtual machine extensions — including a Docker extension — that are used for monitoring and for injecting code into running VMs. You can spin up a VM, activate the Docker extension inside your deployment script, hand it a specific Docker configuration and pull down the Neo4j image, which gives you access to all of your data.

In Azure, you tie everything up with a single resource name. Below there is a VM, a NIC, a network security group (which lets you specify which ports traffic comes in and out on), a public IP address, a VNET and a storage account. All of them are chained together in a single resource group with dependencies between each.

There are a number of ways this can be launched: a REST API call, a language wrapper (such as Java or .NET), command line tools (available for Windows and Mac) and the web portal.

Watch the Resource Manager launch demo video here or read a step-by-step outline below.

In the following example Azure Resource Manager script, there are parameters that allow users to dynamically choose a particular world region, enter a VM admin username and password, specify the machine type, etc. Once these parameters are set, you can start defining dependencies. This example virtual network depends on a network security group and specific subnets. Once the VMs are up and running, we can install the Docker extension and then hand off a Docker image to launch — in this case, the Neo4j image.

This can then be deployed to Azure, which will ask for the various parameters specified by the script author, including the parameters mentioned above (VM name, size, etc.). Once everything has been specified, you create a resource group, which is the name that houses all the different resources, and kick off the process of spinning up an entire cluster, which runs for a few minutes.

This can also be done using the Azure command line tool. In this case, you create a new umbrella resource group which — in this example — is in the western U.S. Next, create a deployment by providing a resource group and deployment name; the case below passes a template URI pointing at a GitHub repo. Then specify the storage account, location, admin username, password and DNS name. Now you can spin up the deployment via the command line.
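The exact commands from the demo aren't shown in this writeup; with today's az CLI, the equivalent steps would look roughly like the following (the resource group name, template URL and parameter values are placeholders):

az group create --name hoverboard --location westus
az deployment group create \
  --resource-group hoverboard \
  --template-uri https://raw.githubusercontent.com/<your-repo>/azuredeploy.json \
  --parameters adminUsername=neo4jadmin dnsNameForPublicIP=neodockerdemo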
Below is an example of a deployment that has already been launched from the portal: the Hoverboard Resource Group. In Azure, navigate to a resource group and then enter the name of your deployment. This shows the exact same grouped resources as above (neodockerVM, neodockervmmyVMNic, etc.). It also creates a public IP address URL. When you paste the URL into your browser and specify the port (in this case, 7474), it pulls up Neo4j. Next, you can SSH into the VM you just created, which allows you to view the Docker logs. You are now SSH'd into the VM and connected to the Docker container, viewing the exact image that is also appearing in the browser.

To recap: the cloud manages the infrastructure. The Azure Resource Manager strings together the pieces of the infrastructure, sets up the dependencies with an atomic operation and deploys the entire resource group together. Docker is then installed on top of those virtual machines, and we automatically inject Neo4j as part of the Docker extension. When that has been completed and spun up, we're left with Neo4j running.

Resources for Additional Learning

To learn more about how to use Neo4j with Docker, please explore the following resources:

Inspired by David and Patrick's talk? Register for GraphConnect Europe on April 26, 2016 at graphconnect.com for more industry-leading presentations and workshops on the evolving world of graph database technology.

About the Author

Patrick Chanezon & David Makogon, Docker & Microsoft Azure

Patrick Chanezon is a member of technical staff at Docker Inc. He helps to build Docker, an open platform for distributed applications for developers and sysadmins. Software developer and storyteller, he spent 10 years building platforms at Netscape and Sun, then 10 years evangelizing platforms at Google, VMware and Microsoft. His main professional interest is in building and kickstarting the network effect for these wondrous two-sided markets called platforms. He has worked on platforms for portals, ads, commerce, social, web, distributed apps and cloud.

David Makogon has been a software creationist and architect for almost 30 years. He's currently a Senior Azure Architect at Microsoft, and has the dubious title of World's First Former Azure MVP. David spends a ton of energy working in the cloud, as well as NoSQL databases and polyglot persistence. Outside of computing, David is an avid photographer and family man, with a penchant for puns and an uncanny ability to read backwards.
https://neo4j.com/blog/neo4j-containers-docker-azure/
CC-MAIN-2019-09
en
refinedweb
import PackageDescriptionlet package = Package(name: "MySwiftProject") let myAwesomeString = "hey I am an awesome string"print(myAwesomeString)let awesomeInt = 500print(awesomeInt) After toggling the breakpoint, press run and then enter p myAwesomeString to print the object More tutorial: Good catch. Let us know what about this package looks wrong to you, and we'll investigate right away.
https://atom.io/packages/swift-debugger
CC-MAIN-2019-09
en
refinedweb
Configuration Configuration is handled using the standard Elixir configuration. Simply add configuration to the :sentry key in the file config/prod.exs: config :sentry, dsn: "___PUBLIC_DSN___" If using an environment with Plug or Phoenix add the following to your router: use Plug.ErrorHandler use Sentry.Plug If you’d like to capture errors from separate processes like Task that may crash, add the line :ok = :error_logger.add_report_handler(Sentry.Logger) to your application’s start function: def start(_type, _opts) do children = [ supervisor(Task.Supervisor, [[name: Sentry.TaskSupervisor]]), :hackney_pool.child_spec(Sentry.Client.hackney_pool_name(), [timeout: Config.hackney_timeout(), max_connections: Config.max_hackney_connections()]) ] opts = [strategy: :one_for_one, name: Sentry.Supervisor] :ok = :error_logger.add_report_handler(Sentry.Logger) Supervisor.start_link(children, opts)!
https://docs.sentry.io/clients/elixir/config/
CC-MAIN-2019-09
en
refinedweb