jackd2 1.9.5~dfsg-17 source package in Ubuntu
Changelog
jackd2 (1.9.5~dfsg-17) unstable; urgency=low * Augment sigsegv.c patch to support SH4 CPUs (Closes: #588124) * Make RTPRIO handling more robust (allow users to delete audio.conf) jackd2 (1.9.5~dfsg-16) experimental; urgency=low * reupload to experimental jackd2 (1.9.5~dfsg-15) unstable; urgency=low [ Adrian Knoth ] * Also provide the shlibs file for libjack-jackd2-0 * Fix FTBFS on sparc64 (Closes: #586257) [ Reinhard Tartler ] * jackd must not be a virtual package, use 'jack-daemon' for that * add breaks/replaces on old libjack0 * change shlibsfile to prefer jackd2's libjack * use conflicts instead of breaks. libjack-jackd2-0 has file conflicts with libjack0 and will keep it [ Jonas Smedegaard ] * Update control file. jackd2 (1.9.5~dfsg-14) unstable; urgency=low * Rename source package to jackd2 * Don't provide libjack-dev anymore * Don't compile libjack0, use libjack-jackd2-0 instead * Move inprocess clients from libjack0 into jackd2 package * Update to upstream SVN revision 4024 jack-audio-connection-kit (1.9.5~dfsg-13) unstable; urgency=low * Fix FTBFS on hppa (Closes: #580824) jack-audio-connection-kit (1.9.5~dfsg-12) unstable; urgency=low * Add -fvisibility=hidden to build flags. * Fix FTBFS on hurd jack-audio-connection-kit (1.9.5~dfsg-11) unstable; urgency=low * Don't use Debian provided waf anymore. (Closes: #580363) * Drop doxygen build dependency. * Update to upstream SVN revision 4008. (uses semaphores on Linux now) jack-audio-connection-kit (1.9.5~dfsg-10) unstable; urgency=low * Fix FTBFS on armel (Closes: #580618) jack-audio-connection-kit (1.9.5~dfsg-9) unstable; urgency=low * Provide fix for GNU/kFreeBSD (Closes: #579465) * Conditionally include audioadapter.so on ALSA platforms only. jack-audio-connection-kit (1.9.5~dfsg-8) unstable; urgency=low * Fix endif placement in ia64/alpha patch. jack-audio-connection-kit (1.9.5~dfsg-7) unstable; urgency=low * Update to upstream SVN version 4003 * Dropping local Debian patches, they are included upstream * Add local patch for IA64 and alpha (Closes: #580089) jack-audio-connection-kit (1.9.5~dfsg-6) unstable; urgency=low * Fix more varargs issues on armel and alpha jack-audio-connection-kit (1.9.5~dfsg-5) unstable; urgency=low * Fix FTBFS on alpha and ia64 (Closes: #580089) * Fix FTBFS on armel (Closes: #580088) jack-audio-connection-kit (1.9.5~dfsg-4) unstable; urgency=low * Upgrade to svn version r3995. (Closes: bug#548537) * Fix FTBFS on kFreeBSD (conditionally enable ALSA) (Closes: bug#579465) * Fix FTBFS on powerpc (Closes: bug#579479) * Fix FTBFS caused by missing atomic operations (Closes: #579464) * Remove outdated documentation (Closes: #577444) jack-audio-connection-kit (1.9.5~dfsg-3) unstable; urgency=low * Fix restructure configure options to only add --firewire on supported architectures. Closes: bug#579237, thanks to Bastian Blank. * Handle library ABI through generalized packaging variable. * Bump API to 0.118.0 (from 9.116.2): Jackd2 1.9.4 declared compliance this newer version of jackd1, and does not promise backwards compatibility. * Tighten symbols older than API version to instead follow API: Future alternative implementations potentially of older version themselves are not assured to be binary compatible. * Have libjack-dev provide virtual libjack0.100.0-dev (...again - was apparently lost at some point upgrading to jackd2). jack-audio-connection-kit (1.9.5~dfsg-2) unstable; urgency=low * Fix relax shlibs file to libjack0 0.116.2 or newer: library API promised upstream to not have changed since. 
Closes: bug#579134, thanks to Reinhard Tartler. * Fix rename symbols file to get properly handled by dh_makeshlibs. * Fix tighten symbols file to depend at minimum 0.116.2 (not 0.116.0). * Suppress lintian error about newly added symbols: Any library symbols changing since 0.116.2 supposedly are outside public API. jack-audio-connection-kit (1.9.5~dfsg-1) unstable; urgency=low [ Adrian Knoth ] * New upstream version 1.9.5 * Remove FreeBoB support. Use the FFADO firewire backend instead. * Build against FFADO-2.0.0 * Enable jackdbus * Rename jack_rec to jackrec (as done in jackd1) * Include (updated) manpages from jackd1 [ Jonas Smedegaard ] * Repackage source: + Strip binary code (shared libraries for MacOS X and Windows). + Strip unneeded sources copyright-protected without licence. + Isolate as patch 0000 changes between tarball release and upstream SVN trunk as of revision 3968. * Sync control.in with latest hand-edited changes to control. * Add myself as uploader. * Drop old Vcs-Bzr stanza from control. * Fix tighten libasound-dev build-dependency to at least 1.0.18. * Lower debhelper build-dependency to 6, to ease backporting (we need no newer features anyway). Fix bump debhelper compat level to match build-dependency. * Update patch handling: + Use source format "3.0 (quilt)" (not CDBS simple-patchsys.mk). + Unfuzz. + Refresh, compacting with quilt --no-timestamps --no-index -pab. + Rename with leading number, and add README to source documenting micro naming scheme. + Add DEP3 header to patch 1001 "earlier connect.diff"). + Unpatch as last clean target to please both git-buildpackage and source format 3.0 (quilt). * Update watch file: + Track jackd2 releases. + Handle DFSG mangling. + Add usage hint. * Fix non-unix line-ends in PO files. * Tidy build routines: + Drop install hints automagically handled by CDBS already. + Separate configure from build in rules file. + Avoid unneeded ifeq construct in rules file. + Use single space in multi-line fields of control file. + Mention use of 'waf distclean' when upstream ships cleaned source. + Fix avoid remove man subdir (clash with dpkg unpatch routine). * Drop file user-howto (7 years old, newest version 4 years old, and is apparently included in jackd.1 manpage already). * Use d-shlibs to resolve development library dependencies. Build-depend on d-shlibs. * Stop build-depending on scons (upstream use waf now). * Use system waf. Build-depend on waf. - Strip binary waf file (too risky to blindly invoke, and too much hassle unpacking and inspecting properly). Stop build-depending on python (only used by local waf now dropped). * Recursively-expand a variable and avoid a linewrap in rules file. * Enable waf verbose mode (easing porter work). * Setup get-orig-source routine: + include cdbs snippet upstream-tarball (suppressing its build- dependencies unneeded for normal builds, to ease backporting). + Repackage source, stripping DFSG-violating files. * Rewrite debian/copyright using draft DEP5 rev. 135 format. * Track symbols, but relaxed due to uncertainty on public library ABI. jack-audio-connection-kit (1.9.4+svn3842-2) experimental; urgency=low * Fix a problem with jack_connect using a hard coded JACK client name and when calling jack_connect or jack_disconnect multiple times in a fast sequence, then the subsequent jack_(dis)connect calls fail. 
jack-audio-connection-kit (1.9.4+svn3842-1) experimental; urgency=low * New upstream release jack-audio-connection-kit (0.118+svn3796-1) unstable; urgency=low [ Jonathan Wiltshire ] * Debconf templates and debian/control reviewed by the debian-l10n- english team as part of the Smith review project. Closes: #550036 * Debconf translation updates: - Swedish (closes: #550493) - Czech (closes: #550499) - Vietnamese (closes: #550563) - Finnish (closes: #550573) - Portugese (closes: #546645, #550574) - Italian (closes: #551924) - Spanish (closes: #551977) - Russian (closes: #552005) - French (closes: #552072) [ Adrian Knoth ] * New upstream release (see NEWS, cmdline args have changed) * Drop versioned build dependency for libreadline. (Closes: #553791) * Remove Guenter Geiger from Uploaders (Closes: #546951) * Build with celt-0.7.0 (Closes: #556821) * Debconf translation updates: - Japanese (Closes: #552230) - German (Closes: #550986) - Galician (Closes: #554228) jack-audio-connection-kit (0.116.2+svn3592-3) unstable; urgency=low [ Adrian Knoth ] * Include Russian translation (Closes: #539460) * Include Czech translation (Closes: #538955) * Include Italian translation (Closes: #543514) * Actively remove /etc/init.d/jackd (Closes: #538963) * Explain new realtime default in the NEWS file (Closes: #539581) * Move FreeBoB and FFADO drivers to new package jackd-firewire, so users without 1394 devices have fewer dependencies. (Closes: #540891) * Fix FTBFS caused by double-installing the manpages (Closes: #543077) * Fix 24bit little endian problem on BE CPUs (Closes: #486308) jack-audio-connection-kit (0.116.2+svn3592-2) unstable; urgency=low [ Adrian Knoth ] * Only include alsa_in and alsa_out on Linux kernels. (Closes: #537976) * Remove /etc/init.d/jackd. (Closes: #527642, #528851) * Remove obsolete jackstart manpage (Closes: 491120) * Allow db_input to fail (Closes: #538051) jack-audio-connection-kit (0.116.2+svn3592-1) unstable; urgency=low [ Adrian Knoth ] * New upstream version 0.116.2+svn3592. Fixes: - segfault in jack_impulse_grabber (Closes: #432208) - jack_connect, jack_disconnect error case (Closes: #465073) - Fix build issues on Hurd (Closes: #320736) * Include updated upstream manpages (Closes: #386466, #521771) * Fix lintian warning about libjack-dev's doc-base * Include new alsa_in and alsa_out binaries * Prompt the user for realtime priority settings (Closes: #507248) * Include armel in dependency arch list (Closes: #460084) * Update upstream location in control files (Closes: #511239) * Enable FFADO firewire audio driver (Closes: #519797) * Remove notes about fetchmail in /etc/init.d/jackd (Closes: #530380) * Fix building jackd twice in a row (Closes: #527335) * Add myself to Uploaders jack-audio-connection-kit (0.116.1-4) unstable; urgency=low * Bugfix: FTBFS on kFreeBSD (Closes: #510127) * Remove obsolete debian/control.in file * debian/control: Reformat Build-Depends field * remove libcap-dev from Build-Depends and Build-Conflits. It was only useful for 2.4 kernels, that we don't want to support anymore anyways. (Closes: #492628) * Redirect stderr in bash completion (Closes: #504488) (LP: #139995) * use ${binary:Version} / ${source:Version} in debian/control instead of the deprecated ${Source-Version} substvar jack-audio-connection-kit (0.116.1-3) unstable; urgency=low * Don't install *.la files in libjack-dev. 
(Closes: #510673) * Add myself to Uploaders jack-audio-connection-kit (0.116.1-2) unstable; urgency=low * Build-Depend on libsamplerate-dev and libcelt-dev (Closes: #509579) * Use su to start jackd from the init script, because start-stop-daemon is not pam-aware and doesn't honor the settings of /etc/security/limits.conf * Maintainer is now pkg-multimedia, set control field accordingly jack-audio-connection-kit (0.116.1-1) unstable; urgency=low [ Asheesh Laroia ] * 11_fix_varargs_to_fix_ftbfs_on_alpha.patch: Add patch to fix alpha build failure by using var args correctly (earlier versions of Jack improperly passed the va_list around). The issue is fixed in upstream svn r3205. (Closes: #508114) [ Free Ekanayaka ] * New upstream release jack-audio-connection-kit (0.115.6-1) unstable; urgency=low * New Upstream Version * Drop kbsd patch, merged upstream * Not using tarball.mk anymore * Update synopsis-spelling patch * Drop path-max patch, fixed upstream jack-audio-connection-kit (0.109.2-4) unstable; urgency=low * Added init script to start jackd at system startup jack-audio-connection-kit (0.109.2-3) unstable; urgency=low * Included patches and changelog entry from NMU 0.109.2-1.1 jack-audio-connection-kit (0.109.2-2) unstable; urgency=low * Support DEB_BUILD_OPTIONS=dopot for optmised re-builds jack-audio-connection-kit (0.109.2-1.1) unstable; urgency=low * Non-maintainer upload. * Don't use lib64 and so avoid setting the rpath in other packages (Closes: #430961) * Don't create a debian/tmp/usr/lib/lib64 that points to itself. * There is no need to remove rpath's from binaries anymore so drop the build dependency on chrpath. jack-audio-connection-kit (0.109.2-1) unstable; urgency=low * New upstream release jack-audio-connection-kit (0.109.0-1) unstable; urgency=low * New upstream release * Conflicts with libjack0.80.0-0, removed reference to the non existing linux-patch-realtime-preempt (Closes: #445528) * Downgraded Recommends jackd to Suggests (Closes: #442814) * Fixed broken chunks in debian/patches/09_kbsd.patch jack-audio-connection-kit (0.103.0-6) unstable; urgency=low * debian/jackd.README.Debian: - added note about using PAM to jack grant realtime privileges (Closes: #425180, #269661) - added note about using the realtime-preempt patch * debian/control: - added libpam-modules to Recommends: - moved qjackctl from Suggests: to Recommends: * debian/rules: - pass --enable-static=yes to ./configure (Closes: #425265) - don't enable -m3dnow and -msse on i386 (Closes: #426144) * rebuilt against flac 1.1.4 (Closes: #426648) jack-audio-connection-kit (0.103.0-5) unstable; urgency=low * debian/control: - build-depend on libfreebob0-dev only on Linux systems, to solve FTBFS on GNU/kFreeBSD (closes: #423895) * debian/patches: - added 09_kbsd.patch, thanks to Petr Salinger * debian/rules: - removed enable-capabilities, as it was causing excessive CPU load when using audacity 1.3.0 with jack via portaudio jack-audio-connection-kit (0.103.0-4) unstable; urgency=low * debian/rules: - enable -m3dnow and -msse on i386 and amd64 - include simple-patchsys.mk before tarball.mk (closes: #422588) jack-audio-connection-kit (0.103.0-3.1) unstable; urgency=low * Non-maintainer upload. * Drop i386 specific CFLAGS. 
(closes: #422289) jack-audio-connection-kit (0.103.0-3) unstable; urgency=low * We are not depending anymore from automake (see changelog entry for version 0.103.0-1), and now the package builds corretly on a plain etch system (Closes: #328133) * Enable dynsimd only for amd64 (Closes: #422076) jack-audio-connection-kit (0.103.0-2) unstable; urgency=low * Enabled dynsimd jack-audio-connection-kit (0.103.0-1) unstable; urgency=low * New upstream release * Drop patches 04_configure_in_jack_version and 02_release-in-libjack-name as suggested by the upstream developers, they will take are of possible ABI changes and add the relevant runtime checks * Drop patch 11_configure.ac, fixed the lib64 path in debian/rules * Drop build-dependency on automake * Rename the library and the headers binary packages to reflect the fact that they are not anymore version-specific, add dummy binary package libjack0.100.0 to provide an upgrade path * Bug fix: "libjack0.100.0-0: 04_configure_in_jack_version.patch makes add-on installation complicated", thanks to Mario Lang (Closes: #353680). * Remove rpath from the binary programs using chrpath * Enabled SSE (see) jack-audio-connection-kit (0.102.20-1) experimental; urgency=low * New upstream release * Deleted patches 10_freebob.patch and 09_jack-ia64.diff, as they are part of the new upstream * Build-Depend on libfreebob0-dev, thanks to Marcio Roberto Teixeira (Closes: #399246). * Added myself to Uploaders * Updated standards to 3.7.2 jack-audio-connection-kit (0.101.1-2) unstable; urgency=low * incorporated patch to fix ia64 atomic operations (closes: #394021) * prepared for freebob support by backporting the freebob driver from jack svn version. jack-audio-connection-kit (0.101.1-1) unstable; urgency=low * new upstream release + no freebob support since libfreebob is not yet packaged jack-audio-connection-kit (0.100.7-1) unstable; urgency=low * new upstream release jack-audio-connection-kit (0.100.0-5) unstable; urgency=low * debian/bash-completion.d/jackd: updated; see Bug#329806 * dont depend on libglib1.2-dev; closes: Bug#326212 * debian/watch: update jack-audio-connection-kit (0.100.0-4) unstable; urgency=low * debian/bash_completion.d/jackd, debian/jackd.install: closes: #319764 (jackd: bash completion for jack_connect) jack-audio-connection-kit (0.100.0-3) unstable; urgency=low * debian/rules: build with tmpdir=/dev/shm again; closes: Bug#321149 (jackd not using /dev/shm as tmpdir) * debian/patches/07_path-max.patch: closes: Bug#320736 (Patch to handle PATH_MAX-less systems) * debian/patches/08_synopsis-spelling.patch: closes: #311465 ('man jack_bufsize' typo: "SYNOPSYS") * debian/copyright: mention authors as copyright holders and license as license and not as copyright; closes: #290186 (Improper copyright file) jack-audio-connection-kit (0.100.0-2) unstable; urgency=low * upload 0.100.0-1 unchanged to unstable jack-audio-connection-kit (0.100.0-1) experimental; urgency=low * new upstream release * new SONAME again. noone was using that 0.99.61 from experimental and it confuses people. 
jack-audio-connection-kit (0.99.61-1) experimental; urgency=low * intermediate snapshot release + new jack_client_open() interface requiring a new SONAME: 0.99.61 (although the first incompatible change happened at least in 0.99.14) + debian/patches/04_configure_in_jack_version.patch: updated * debian/FAQ: updated from webpage jack-audio-connection-kit (0.99.0-6) unstable; urgency=high * do not use the "[system: linux]" stuff for "Depends"; patch from Daniel Schepler <email address hidden>; closes: Bug#295804 * urgency high due to things in 0.99.0-5 * debian/control: build against libreadline5-dev jack-audio-connection-kit (0.99.0-5) unstable; urgency=high * debian/patches/01a_force-copy-autogen.sh.patch: dont make symlinks for config.{guess,sub}; let cdbs play with it; closes: Bug#295284 * urgency high because the bug is present in testing and will surface as soon as the next cdbs gets there * debian/control.in, rules: use cdbs' new crazy mechanism of mangling control; closes: Bug#272307 jack-audio-connection-kit (0.99.0-4) unstable; urgency=low * debian/control.in: added a Recommends: jackd (= ${Source-Version}) to libjack to express the need to install jackd in order to get a working libjack. jack-audio-connection-kit (0.99.0-3) unstable; urgency=low * moved Junichi to Uploaders and me to Maintainer in debian/control. Thanks, Junichi, for your great work reagrding this package! * debian/control.in, debian/rules: added kfreebsd-gnu handling; thanks to Robert Millan <email address hidden>; closes: Bug#272307 * debian/control.in: convert "Depends: jackd" into a two-sided conflicts; closes: Bug#248665 jack-audio-connection-kit (0.99.0-2) unstable; urgency=medium * upload unchanged to unstable; * urgency medium because of important fixes for i586 users (Bug#266975) and NPTL workaround (Bug#266507) jack-audio-connection-kit (0.99.0-1) experimental; urgency=low * new upstream release + works around pthread-create bug in glibc (Bug#266507) + debian/patches/03_remove-cpp-atomicity.patch, debian/patches/07_dont_add_readline_to_LIBS.patch: removed since applied upstream + --disable-iec61883 since it doesn't compile * uplod to experimental to not hinder the other version entering testing * debian/rules: don't optimize for i686 on i386; closes: Bug#266975 jack-audio-connection-kit (0.98.1-5) unstable; urgency=medium * debian/patches/03_remove-cpp-atomicity.patch: use a regular patch for removing the atomicity files for sparc and hppa * debian/patches/07_dont_add_readline_to_LIBS.patch: added; add a noop to prevent linking libjack0.80.0-0 against libreadline4; closes: Bug#260954, Bug#260961; urgency medium because this breaks other packages' builds * correct spelling error in debian/jackd.README.Debian: powerful * debian/watch: added jack-audio-connection-kit (0.98.1-4) unstable; urgency=low * debian/rules: remove atomicity.h for hppa and sparc to work around the FTBS; closes: Bug#256221 for now jack-audio-connection-kit (0.98.1-3) unstable; urgency=low * upload experimental version unchanged to unstable jack-audio-connection-kit (0.98.1-2) experimental; urgency=low * debian/shlibs.local: use ${Source-Version}, thanks to Elimar Riesebieter <email address hidden> jack-audio-connection-kit (0.98.1-1) experimental; urgency=low * new upstream release + fulfills whishes for init.d scripts or other ways of automatically starting jackd if applications need it (analogy to esd); set JACK_START_SERVER in your environment to enable it; closes: Bug#169776 + non-existent function has been removed from the header; 
closes: Bug#245742 * debian/FAQ: updated from webpage * debian/README.developers: updated from webpage * debian/user-howto: updated from webpage jack-audio-connection-kit (0.96.2-1) experimental; urgency=low * new upstream snapshot from CVS branch EXP to test new autoconf structure. + also contains new features, does not break binary compatibility + probably won't compile on arm because the generic atomicity.h implementation is broken. * debian/patches/20-check-rc-from-initialize-shm.diff: integrated upstream jack-audio-connection-kit (0.94.0-4) unstable; urgency=low * the "you are never entirely done" release * forgot to update debian/shlibs.local jack-audio-connection-kit (0.94.0-3) unstable; urgency=low * libjack0.80.0-dev should indeed be Section: libdevel. I don't know, where the actual change disapeared. jack-audio-connection-kit (0.94.0-2) unstable; urgency=low * debian/control: libjack0.80.0-dev is Section: libdevel * debian/jackd.README.Debian, debian/rules: + add a lot of documentation and support for setuid jackstart; addresses our part of Bug#229709, which can then be reassigned to wnpp + manpages, FAQ, a user-howto and a extensive README.Debian are there; closes: Bug#148933 + add documentation and support for /dev/shm as tmpdir; needs libc6 >= 2.3.2.ds1-11 because they create and mount the tmpfs; closes: Bug#229374 + added hints to the lowlatency and preempt patches in Debian and to the givertcap patch in AGNULA. + add information and pointers to the realtime LSM * debian/control: + jackd "Suggests: qjackctl, jack-tools, meterbridge, libjackasyn0" which create a sufficient and extended toolkit and environment for jackd + libjack0.80.0-0: Depends: jackd (= ${Source-Version}), for discussion see l.d.o/debian-multimedia * debian/patches/20-check-rc-from-initialize-shm.diff: added; addresses the remaining notes and finally closes: Bug#234072 jack-audio-connection-kit (0.94.0-1) unstable; urgency=low * new upstream release + fixes command-line parsing * rewrote urls as <http://...> as recommended in RFC 2396, Appendix E * JACK 0.75.0 entered testing; uploading to unstable jack-audio-connection-kit (0.91.1-1) experimental; urgency=low * New upstream release + does not break binary compatibility + enable experimental firewire drivers - debian/control: Build-Depends: libraw1394-dev - debian/patches/06_iec61883_headers.patch add files missing from tarball + obsoletes debian/patches/03_cpuinfo_other_archs.patch * debian/FAQ: updated from webpage * debian/libjack0.80.0-dev.install: don't install *.la files for jack plugins * debian/shlibs.local: added: remove duplicate depends on libjack for jackd * debian/jack_freewheel.1, debian/jack_bufsize.1: wrote manpages jack-audio-connection-kit (0.80.0-1) experimental; urgency=low * new upstream release; binary compatibility mostly remains, source compatibility breaks, chose the new soname; upload to experimental * debian/patches/03_cycles-h-other-archs.patch: partially integrated upstream * debian/patches/03_cpuinfo_other_archs.patch: parses cpuinfo on other architectures by Junichi Uekawa; closes: #207435 * debian/control: changed <email address hidden> to <email address hidden> jack-audio-connection-kit (0.75.0-2) unstable; urgency=low * Add replaces libjack0.71.2-0 (<< 0.75.0-1), due to moved 'development binaries for plugins'. 
(closes: #207731) * Standards-Version: 3.6.1 jack-audio-connection-kit (0.75.0-1) unstable; urgency=low * new upstream release * debian/rules: switched to cdbs + tarball.mk: great way to ensure the autotools horror doesn't pollute the diff + simple-patchsys.mk: works great with the dpatches renamed. + debian/rules: builds optimized for i386: closes: #202589 have a nonoptimized version with DEB_BUILD_OPTIONS=noopt as per policy * debian/control: + use dh-buildinfo + new comaintainers Guenter Geiger <email address hidden> and Robert Jordens <email address hidden> + Standards-Version: 3.6.0: no changes + won't change the package name (and release of libjack) again because binary compatibility didn't break. JACK_API_CURRENT will be set to 1 as soon as the new era of binary compatibility starts upstream. Then libjack0.71.2 will be named libjack1. closes: #205552 + Build-Depends: libreadline4-dev to build jack_transport * debian/lib*.install: moved the jack plugins' development versions to libjack0.71.2-dev * debian/libjack0.71.2-dev.docs: added README.developers to libjack0.71.2-dev and ship it as a file, not as a patch * debian/FAQ: updated from webpage * 03_cycles-h-other-archs.patch: updated with code from the kernel headers that works well for ardour, thus reducing the number of archs where the workaround is used to: arm, sparc, m68k * debian/jack_load.1, debian/jack_unload.1, debian/jack_transport.1, debian/jack_monitor_client.1, debian/jack_simple_client.1: wrote the missing manpages. * debian/user-howto: added to jackd documentation and updated from webpage jack-audio-connection-kit (0.71.2-1) unstable; urgency=low * New upstream release * Update patches: 04_configure_in_jack_version: update for 0.71.2 02_version-soname: update. * debian/*: manually edit for 0.71.2 debian/jackd.manpages: use upstream manpages for jackd and jackstart * add rules to build jack_md5.h before jackstart * FAQ update rules fixed to add changelog entry * build in a subdir * debian/FAQ: updated from webpage * debian/rules: use patch-stamp instead of patch * [05_jack_md5h.dpatch] fix jack_md5.h dependency * run autoconf2.5/automake1.7 over source jack-audio-connection-kit (0.50.0-2) UNRELEASED; urgency=low * Use dpatch to manage patches. 01_readme-developers 02_version-soname 03_cycles-h 04_configure_in_jack_version - autoconf/automake needs to be re-ran after applying those patches, added a rule to do that to debian/rules (auto-run) * debian/rules: fix to properly handle autoconf 2.57-generated configure, instead of 2.13 jack-audio-connection-kit (0.50.0-1) unstable; urgency=low * New upstream release, new maintainer. * Use DESTDIR instead of prefix= in install target. * Misc updating, forward-porting of patches, etc. * I am keeping the soname convention as it is, since upstream is still not decided on a stable interface. * re-run aclocal/autoconf/automake. * use w3m instead of lynx to get the FAQ jack-audio-connection-kit (0.44.0-1) unstable; urgency=low * New upstream release (CVS) * Re-add some missing binaries and manpages that got lost somehow. jack-audio-connection-kit (0.40.1-1) unstable; urgency=low * New upstream release (CVS) * Keep library versioning based on package version although upstream doesn't anymore, as long as different "releases" (CVS snapshots actually) aren't guaranteed to be binary compatible. * JACK now doesn't depend on glib anymore (closes: #154773). jack-audio-connection-kit (0.38.0-1) unstable; urgency=low * New upstream release (CVS). 
jack-audio-connection-kit (0.37.2-1) unstable; urgency=low * New upstream release (CVS). As there doesn't seem to be a release in sight, and cvs has been on it's current status for a while i decided to switch to the cvs version. Unfortunately this means recompiling for packages depending on libjack again... * Ship a changelog generated by cvs2cl from the upstream sources. Upstream doesn't unfortunately. jack-audio-connection-kit (0.34.0-6) unstable; urgency=low * Added patch for jackrec to build with libsndfile1. * debian/control: build-dep on libsndfile1-dev jack-audio-connection-kit (0.34.0-5) unstable; urgency=low * Change address in debian/control as well... jack-audio-connection-kit (0.34.0-4) unstable; urgency=low * New maintainer email address * Applied patch by Junichi Uekawa to remove the unconditional error from cycles.h to hopefully enable build on more architectures (closes: #148699). jack-audio-connection-kit (0.34.0-3) unstable; urgency=low * Small manpage updates * Added ALSA-related URLs to debian/asound.rc * Renamed libjack0 to libjack0.34.0-0 and relaxed shlibs dependency (closes: #149687) * Removed jack_alsa from shlibs file * Fixed FAQ line-length * Removed autogen.sh from the diff.gz * debian/rules - Avoid stripping of the jackd binary if DEB_BUILD_OPTIONS=nostrip is set - Made configure a phony target again - Added faq target to fetch the FAQ from the website * libjack0.34.0-dev: added dependency on pkg-config (closes: #150089) * Removed some of the less useful example clients, upstream will do the same in the next release, build-dep on libfltk could be dropped jack-audio-connection-kit (0.34.0-2) unstable; urgency=low * Added more documentation (first step to address #148933) - added manpages - added w3m -dump'ed version of the FAQ from the website - added example .asoundrc - remove rather pointless upstream README * Applied patch by Junichi Uekawa to enable build on ppc * Removed maintainer-only rules from debian/rules jack-audio-connection-kit (0.34.0-1) unstable; urgency=low * Repackaged from scratch. Thanks to Junichi Uekawa for his previous work on jack packaging and for useful hints how to get my package into a releasable state! (closes: #141450) * New upstream release jack (0.8.0.cvs) unstable; urgency=low * cvs update jack (0.6.0.cvs) unstable; urgency=low * CVS Checkout source, packaging it. jack (0.4.7-1) unstable; urgency=low * Initial attempt to create a Debian package out of the Sourceforge file release. -- ????? ????? <email address hidden> Wed, 14 Jul 2010 11:22:39 +0100
Upload details
- Uploaded by:
- Artem Popov on 2010-07-14
- Original maintainer:
- Debian Multimedia Maintainers
- Architectures:
- any
- Section:
- sound
- Urgency:
- Very Urgent
See full publishing history
Downloads
Binary packages built by this source
- jackd2: No summary available for jackd2 in ubuntu maverick.
No description available for jackd2 in ubuntu maverick.
- jackd2-firewire: No summary available for jackd2-firewire in ubuntu maverick.
No description available for jackd2-firewire in ubuntu maverick.
- libjack-jackd2-0: No summary available for libjack-jackd2-0 in ubuntu maverick.
No description available for libjack-jackd2-0 in ubuntu maverick.
- libjack-jackd2-dev: No summary available for libjack-jackd2-dev in ubuntu maverick.
No description available for libjack-jackd2-dev in ubuntu maverick.
Source: https://launchpad.net/ubuntu/+source/jackd2/1.9.5~dfsg-17 (CC-MAIN-2019-09, en, refinedweb)
Bracket
Bracket is an extension of MonadError exposing the bracket operation, a generalized, abstracted pattern of safe resource acquisition and release in the face of errors or interruption.

Important note: throwing inside the release function is undefined behavior, since it is left to the concrete implementations (e.g. cats-effect Bracket[IO], Monix Bracket[Task], or ZIO).
import cats.MonadError

sealed abstract class ExitCase[+E]

trait Bracket[F[_], E] extends MonadError[F, E] {

  def bracketCase[A, B](acquire: F[A])(use: A => F[B])
      (release: (A, ExitCase[E]) => F[Unit]): F[B]

  // Simpler version, doesn't distinguish b/t exit conditions
  def bracket[A, B](acquire: F[A])(use: A => F[B])
      (release: A => F[Unit]): F[B]
}
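To make the operation concrete, here is a small usage sketch (mine, not from the original page) of the simpler bracket on cats-effect's IO; the file name is only a placeholder:

import cats.effect.IO
import scala.io.Source

// Acquire a Source, use it, and always release it,
// whether `use` succeeds, fails, or is interrupted.
val firstLine: IO[String] =
  IO(Source.fromFile("example.txt")).bracket { src =>
    IO(src.getLines().next())   // use the resource
  } { src =>
    IO(src.close())             // release: runs on success, error, or cancellation
  }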
Source: https://typelevel.org/cats-effect/typeclasses/bracket.html (CC-MAIN-2019-09, en, refinedweb)
Kleisli
Kleisli enables composition of functions that return a monadic value, for instance an Option[Int] or an Either[String, List[Double]], without having functions take an Option or Either as a parameter, which can be strange and unwieldy.

We may also have several functions which depend on some environment and want a nice way to compose these functions to ensure they all receive the same environment. Or perhaps we have functions which depend on their own “local” configuration and all the configurations together make up a “global” application configuration. How do we have these functions play nice with each other despite each only knowing about their own local requirements?

These situations are where Kleisli is immensely helpful.
Functions
One of the most useful properties of functions is that they compose. That is, given a function A => B and a function B => C, we can combine them to create a new function A => C. It is through this compositional property that we are able to write many small functions and compose them together to create a larger one that suits our needs.
val twice: Int => Int =
  x => x * 2

val countCats: Int => String =
  x => if (x == 1) "1 cat" else s"$x cats"

val twiceAsManyCats: Int => String =
  twice andThen countCats // equivalent to: countCats compose twice
Thus:

twiceAsManyCats(1) // "2 cats"
// res0: String = 2 cats
Sometimes, our functions will need to return monadic values. For instance, consider the following set of functions.
val parse: String => Option[Int] =
  s => if (s.matches("-?[0-9]+")) Some(s.toInt) else None

val reciprocal: Int => Option[Double] =
  i => if (i != 0) Some(1.0 / i) else None
As it stands we cannot use Function1.compose (or Function1.andThen) to compose these two functions. The output type of parse is Option[Int] whereas the input type of reciprocal is Int.

This is where Kleisli comes into play.
Kleisli
At its core, Kleisli[F[_], A, B] is just a wrapper around the function A => F[B]. Depending on the properties of the F[_], we can do different things with Kleislis. For instance, if F[_] has a FlatMap[F] instance (we can call flatMap on F[A] values), we can compose two Kleislis much like we can two functions.
import cats.FlatMap
import cats.implicits._

final case class Kleisli[F[_], A, B](run: A => F[B]) {

  def compose[Z](k: Kleisli[F, Z, A])(implicit F: FlatMap[F]): Kleisli[F, Z, B] =
    Kleisli[F, Z, B](z => k.run(z).flatMap(run))
}
Returning to our earlier example:
// Bring in cats.FlatMap[Option] instance
import cats.implicits._

val parse: Kleisli[Option, String, Int] =
  Kleisli((s: String) => if (s.matches("-?[0-9]+")) Some(s.toInt) else None)

val reciprocal: Kleisli[Option, Int, Double] =
  Kleisli((i: Int) => if (i != 0) Some(1.0 / i) else None)

val parseAndReciprocal: Kleisli[Option, String, Double] =
  reciprocal.compose(parse)
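As a quick check (not part of the original page; the results assume the definitions above):

parseAndReciprocal.run("5")   // Some(0.2)
parseAndReciprocal.run("0")   // None: reciprocal is undefined for 0
parseAndReciprocal.run("abc") // None: parsing fails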
Kleisli#andThen can be defined similarly.
It is important to note that the F[_] having a FlatMap (or a Monad) instance is not a hard requirement - we can do useful things with weaker requirements. Such an example would be Kleisli#map, which only requires that F[_] have a Functor instance (e.g. is equipped with map: F[A] => (A => B) => F[B]).
import cats.Functor

final case class Kleisli[F[_], A, B](run: A => F[B]) {

  def map[C](f: B => C)(implicit F: Functor[F]): Kleisli[F, A, C] =
    Kleisli[F, A, C](a => F.map(run(a))(f))
}
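As a small illustration (mine, not the original page's), map lets us post-process a result without leaving Option, reusing the parse Kleisli defined earlier:

parse.map(_ * 2).run("3")    // Some(6)
parse.map(_ * 2).run("oops") // None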
Below are some more methods on Kleisli that can be used so long as the constraint on F[_] is satisfied.

Method    | Constraint on `F[_]`
----------|---------------------
andThen   | FlatMap
compose   | FlatMap
flatMap   | FlatMap
lower     | Monad
map       | Functor
traverse  | Applicative
Type class instances
The type class instances for Kleisli, like that for functions, often fix the input type (and the F[_]) and leave the output type free. What type class instances it has tends to depend on what instances the F[_] has. For instance, Kleisli[F, A, B] has a Functor instance so long as the chosen F[_] does. It has a Monad instance so long as the chosen F[_] does. The instances in Cats are laid out in a way such that implicit resolution will pick up the most specific instance it can (depending on the F[_]).

An example of a Monad instance for Kleisli is shown below.
Note: the example below assumes usage of the kind-projector compiler plugin and will not compile if it is not being used in a project.
import cats.implicits._

// We can define a FlatMap instance for Kleisli if the F[_] we chose has a FlatMap instance
// Note the input type and F are fixed, with the output type left free
implicit def kleisliFlatMap[F[_], Z](implicit F: FlatMap[F]): FlatMap[Kleisli[F, Z, ?]] =
  new FlatMap[Kleisli[F, Z, ?]] {
    def flatMap[A, B](fa: Kleisli[F, Z, A])(f: A => Kleisli[F, Z, B]): Kleisli[F, Z, B] =
      Kleisli(z => fa.run(z).flatMap(a => f(a).run(z)))

    def map[A, B](fa: Kleisli[F, Z, A])(f: A => B): Kleisli[F, Z, B] =
      Kleisli(z => fa.run(z).map(f))

    def tailRecM[A, B](a: A)(f: A => Kleisli[F, Z, Either[A, B]]) =
      Kleisli[F, Z, B]({ z => FlatMap[F].tailRecM(a) { f(_).run(z) } })
  }
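A short sketch of what this flatMap gives us (invented names, using the real cats.data.Kleisli): both steps read the same input value, which is what makes Kleisli useful for threading an environment.

import cats.data.Kleisli
import cats.implicits._

val addOne: Kleisli[Option, Int, Int] = Kleisli(z => Some(z + 1))

// The function passed to flatMap also receives the original Int input.
val sumWithInput: Kleisli[Option, Int, Int] =
  addOne.flatMap(a => Kleisli[Option, Int, Int](z => Some(a + z)))

sumWithInput.run(10) // Some(21): (10 + 1) + 10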
Below is a table of some of the type class instances Kleisli can have depending on what instances F[_] has.

Type class   | Constraint on `F[_]`
-------------|---------------------
Functor      | Functor
Apply        | Apply
Applicative  | Applicative
FlatMap      | FlatMap
Monad        | Monad
Arrow        | Monad
Split        | FlatMap
Strong       | Functor
SemigroupK*  | FlatMap
MonoidK*     | Monad
*These instances only exist for Kleisli arrows with identical input and output types; that is, Kleisli[F, A, A] for some type A. These instances use Kleisli composition as the combine operation, and Monad.pure as the empty value.

Also, there is an instance of Monoid[Kleisli[F, A, B]] if there is an instance of Monoid[F[B]]. Monoid.combine here creates a new Kleisli arrow which takes an A value and feeds it into each of the combined Kleisli arrows, which together return two F[B] values. Then, they are combined into one using the Monoid[F[B]] instance.
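A hedged sketch of that combination (the example names are invented; it relies on Cats deriving Monoid[Option[String]] from the String monoid):

import cats.data.Kleisli
import cats.implicits._

val greet: Kleisli[Option, String, String] = Kleisli(name => Some(s"Hi $name"))
val ask: Kleisli[Option, String, String]   = Kleisli(_ => Some(", how are you?"))

// |+| feeds the same input to both arrows and combines the two Option[String] results.
(greet |+| ask).run("Eva") // Some("Hi Eva, how are you?")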
Other uses
Monad Transformers
Many data types have a monad transformer equivalent that allows us to compose the Monad instance of the data type with any other Monad instance. For instance, OptionT[F[_], A] allows us to compose the monadic properties of Option with any other F[_], such as a List. This allows us to work with nested contexts/effects in a nice way (for example, in for-comprehensions).

Kleisli can be viewed as the monad transformer for functions. Recall that at its essence, Kleisli[F, A, B] is just a function A => F[B], with niceties to make working with the value we actually care about, the B, easy. Kleisli allows us to take the effects of functions and have them play nice with the effects of any other F[_].
This may raise the question, what exactly is the “effect” of a function?
Well, if we take a look at any function, we can see it takes some input and produces some output with it, without having touched the input (assuming the function is pure, i.e. referentially transparent). That is, we take a read-only value, and produce some value with it. For this reason, the type class instances for functions often refer to the function as a Reader. For instance, it is common to hear about the Reader monad.

In the same spirit, Cats defines a Reader type alias along the lines of:
// We want A => B, but Kleisli provides A => F[B]. To make the types/shapes match,
// we need an F[_] such that providing it a type A is equivalent to A
// This can be thought of as the type-level equivalent of the identity function
type Id[A] = A

type Reader[A, B] = Kleisli[Id, A, B]

object Reader {
  // Lifts a plain function A => B into a Kleisli, giving us access
  // to all the useful methods and type class instances
  def apply[A, B](f: A => B): Reader[A, B] = Kleisli[Id, A, B](f)
}

type ReaderT[F[_], A, B] = Kleisli[F, A, B]
val ReaderT = Kleisli
The ReaderT value alias exists to allow users to use the Kleisli companion object as if it were ReaderT, if they were so inclined.
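A tiny sketch (invented names, assuming the Reader alias defined above and cats.implicits._ in scope) showing that a Reader is just a Kleisli whose effect is Id, so running it is an ordinary function call:

case class Env(greeting: String)

val readGreeting: Reader[Env, String] = Reader(env => env.greeting + "!")

readGreeting.run(Env("hello"))                    // "hello!"
readGreeting.map(_.toUpperCase).run(Env("hello")) // "HELLO!"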
The topic of functions as a read-only environment brings us to our next common use case of Kleisli - configuration.
Configuration
Functional programming advocates the creation of programs and modules by composing smaller, simpler modules. This philosophy intentionally mirrors that of function composition - write many small functions, and compose them to build larger ones. After all, our programs are just functions.
Let’s look at some example modules, where each module has its own configuration that is validated by a function.
If the configuration is good, we return a Some of the module, otherwise a None. This example uses Option for simplicity - if you want to provide error messages or other failure context, consider using Either instead.
We have two independent modules, a Db (allowing access to a database) and a Service (supporting an API to provide data over the web). Both depend on their own configuration parameters. Neither know or care about the other, as it should be. However our application needs both of these modules to work. It is plausible we then have a more global application configuration.
case class AppConfig(dbConfig: DbConfig, serviceConfig: ServiceConfig)

class App(db: Db, service: Service)
As it stands, we cannot use both Kleisli validation functions together nicely - one takes a DbConfig, the other a ServiceConfig. That means the FlatMap (and by extension, the Monad) instances differ (recall the input type is fixed in the type class instances). However, there is a nice function on Kleisli called local.
final case class Kleisli[F[_], A, B](run: A => F[B]) {
  def local[AA](f: AA => A): Kleisli[F, AA, B] =
    Kleisli(f.andThen(run))
}
What local allows us to do is essentially “expand” our input type to a more “general” one. In our case, we can take a Kleisli that expects a DbConfig or ServiceConfig and turn it into one that expects an AppConfig, so long as we tell it how to go from an AppConfig to the other configs.
Now we can create our application config validator!
final case class Kleisli[F[_], Z, A](run: Z => F[A]) {

  def flatMap[B](f: A => Kleisli[F, Z, B])(implicit F: FlatMap[F]): Kleisli[F, Z, B] =
    Kleisli(z => F.flatMap(run(z))(a => f(a).run(z)))

  def map[B](f: A => B)(implicit F: Functor[F]): Kleisli[F, Z, B] =
    Kleisli(z => F.map(run(z))(f))

  def local[ZZ](f: ZZ => Z): Kleisli[F, ZZ, A] =
    Kleisli(f.andThen(run))
}

case class AppConfig(dbConfig: DbConfig, serviceConfig: ServiceConfig)

class App(db: Db, service: Service)

def appFromAppConfig: Kleisli[Option, AppConfig, App] =
  for {
    db <- Db.fromDbConfig.local[AppConfig](_.dbConfig)
    sv <- Service.fromServiceConfig.local[AppConfig](_.serviceConfig)
  } yield new App(db, sv)
What if we need a module that doesn't need any config validation, say a strategy to log events? We would have such a module be instantiated from a config directly, without an Option - we would have something like Kleisli[Id, LogConfig, Log] (alternatively, Reader[LogConfig, Log]). However, this won't play nice with our other Kleislis since those use Option instead of Id.
We can define a lift method on Kleisli (available already on Kleisli in Cats) that takes a type parameter G[_] such that G has an Applicative instance, and lifts a Kleisli value such that its output type is G[F[B]]. This allows us to then lift a Reader[A, B] into a Kleisli[G, A, B]. Note that lifting a Reader[A, B] into some G[_] is equivalent to having a Kleisli[G, A, B], since Reader[A, B] is just a type alias for Kleisli[Id, A, B], and type Id[A] = A, so G[Id[A]] is equivalent to G[A].
Source: https://typelevel.org/cats/datatypes/kleisli.html (CC-MAIN-2019-09, en, refinedweb)
In this article, we are going to download the dataset that we are going to use for our model.
This article is the second part of the series about this amazing segmentation network used for the task of semantic segmentation. This network brings a novel approach that decouples spatial information preservation (high-resolution features) from the receptive field by offering two paths. Specifically, the paper proposes a Bilateral Segmentation Network (BiSeNet) with a Spatial Path (SP) and a Context Path (CP). These two paths address the trade-off in previous semantic segmentation approaches, which compromised accuracy for speed.
I have already covered semantic segmentation and the introduction to BiSeNet; they are not in the scope of this blog post. Therefore, for more details about BiSeNet with SP & CP, please click the links to see my previous blog posts.
Without further ado, let’s get this show on the road!
Dataset
A data set (or dataset) is a collection of data. Most commonly, a data set corresponds to the contents of a single database table; an Excel spreadsheet is a familiar example of a dataset.
First things first, we have got to have data to train our Artificial Intelligence (AI) algorithm. For example, in supervised learning we want the AI algorithm to learn the mapping between the input (x) and the output (y), so that given a new input (x) that was not part of the dataset it was trained on, it can predict the output (y).
In order to get a dataset, we can collect it ourselves via web scraping or download a dataset someone else put together. There are some pretty famous dataset repositories which offer a variety of datasets for many purposes, such as:
- ImageNet
- CamVid (which is the dataset we will be using)
- COCO stuff
- Kaggle and etc..
Contrary to what many think, Kaggle does not only host data science competitions; it is also a dataset and kernel repository, where data scientists share their datasets and the kernels that give more insight into them.
For our implementation of the BiSeNet, we are going to use a dataset called CamVid, that was mentioned in the research paper by the researchers that invented BiSeNet.
The CamVid dataset is a street scene dataset from the perspective of a driving automobile. It is composed of 701 images in total, of which 367 are for training, 101 for validation, and 233 for testing. The images have a resolution of 960×720 and 11 semantic categories/labels.
To download the dataset and labels click here.
Preprocessing and Visualizing
We might not use the entire dataset, because some features are not important or relevant to our problem.
For example, if we want an algorithm to distinguish dogs from cats, and our dataset contains their pictures in one column and the names of the owners in another column as input features, we can discard the owner-name column because it does not contribute at all to distinguishing dogs from cats; all we need are features like the shape of the ears, the nose, and the type of fur, which are in the picture column.
Implementation
In this section, I'm going to present the code I use to load the files, convert them into a NumPy array, and plot them using Jupyter notebooks.
The first step is to import the libraries we are going to use to manipulate the data, and to get the path where the dataset is saved, whether on your laptop or in the cloud.
import os
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
#image_path is defined as a global variable
image_path = "path to where you saved your dataset"
After this, we are going to write a function which receives path as a parameter in order to get all the files in that path and put them in a list.
def loadImages(path):
    image_files = sorted([os.path.join(path, 'train', file)
                          for file in os.listdir(path + "/train") if file.endswith('.png')])
    annotation_files = sorted([os.path.join(path, 'trainannot', file)
                               for file in os.listdir(path + "/trainannot") if file.endswith('.png')])
    image_val_files = sorted([os.path.join(path, 'val', file)
                              for file in os.listdir(path + "/val") if file.endswith('.png')])
    annotation_val_files = sorted([os.path.join(path, 'valannot', file)
                                   for file in os.listdir(path + "/valannot") if file.endswith('.png')])
    return image_files, annotation_files, image_val_files, annotation_val_files
Then we can write the main function that calls loadImages() and passes it the path. Algorithms can't read an image as it is; we need to convert it into a 2D array so we can feed it to the algorithm and visualize it.
def main():
    # calling the global variable
    global image_path
    # dataset is a list of size 4 holding the file lists from the different
    # folders, so we can access each folder's files via dataset[0...3]
    dataset = loadImages(image_path)
    train_set = dataset[0]
    r = [i for i in train_set]
    print(r[0])
    img_1 = mpimg.imread(r[0])
    display = plt.imshow(img_1)
    print(img_1)

main()
For the full code used in this post, here is the link for my GitHub.
This repo contains the code for my first attempt to implement this amazing research paper. …github.com
This concludes Part II of this series about BiSeNet. Stay tuned for more amazing content and the next part, with the code for implementing this state-of-the-art real-time semantic segmentation network from the research paper.
Thank you for reading if you have any thoughts, comments or critics please comment down below.
If you like it please give me a round of applause👏👏 👏and share it with your friends.
Resources:
In this post, I’m going to give you an introduction to Bilateral Segmentation Network for Real-time Semantic…medium.com
The recipe of the Success of BiSeNet (Bilateral Segmentation Network).medium.com
Data is measured, collected and reported, and analyzed, whereupon it can be visualized using graphs, images or other…en.wikipedia.org
In relational databases and flat file databases, a table is a set of data elements (values) using a model of vertical…en.wikipedia.org
Source: Deep Learning on Medium
Source: http://mc.ai/bisenet-for-real-time-segmentation-part-ii/ (CC-MAIN-2019-09, en, refinedweb)
“The introduction of new object-oriented programming features in PHP V5 has significantly raised the level of functionality in this programming language. Not only can you have private, protected, and public member variables and functions – just as you would in the Java, C++, or C# languages – but you can also create objects that bend at runtime, creating new methods and member variables on the fly.”
Going Dynamic with PHP v5
About The Author
Thom Holwerda
Follow me on Twitter @thomholwerda
35 Comments
2006-02-20 7:39 pm - MightyPenguin
I want to know what scripting language doesn't have these problems? Pretty much all semi-commonly used scripting languages allow these things.
I do agree with you that this is one of the reasons that scripting languages are frowned on in larger projects with many developers: the language doesn't enforce good coding practices and strong type checking like Java, for instance.
>For me, this is a no-no, force people to declare their
>variables and specify a type. Makes poor programmers.
Hmm.. what are you talking about? Even in C you declare variables where you need them. It's common in many languages to do this. In fact I think it makes the code cleaner, because you are not reading code thinking "where the hell did that variable come from"; it's declared right there and you know what it is, what type it is, and what it is for.
C Example:
int count = 1;
while (count <= 100)
{
    printf("%d\n", count);
    count += 1;
}
You're just bitching for no reason.
2006-02-20 6:56 pm - kamper
Even in C you declare variables where you need them. It’s common in many languages to do this.
I don’t think he was so much talking about where you declare variables, so much as whether or not you actually have to declare and strongly type them. Sure, you can easily argue in favour of php’s looser model, but both approaches are quite valid. The interesting thing is deciding when to use each one.
2006-02-20 8:17 pm - phoenix
You miss his point entirely.
In C, C++, Java, etc. you have to declare a variable before you use it (give it a name, a type, etc).
In PHP, you just use the variable.
2006-02-20 9:19 pm - unoengborg
It is also much easier to create tools to help you write the correct variable names in a typed language. E.g. if you write java code in Eclipse or any other modern java tool pressing the dot after an object name will present you with a popup with all methods and variables available on that object so that you easily can do auto complete.
In some cases (like in Eclipse) it even shows whatever documentation you made for that method.
Another problem with languages like php is that the lack of typed variables, makes testing much more complex if you want to make sure that the program really does what it is supposed to do. In a large project the work of writing such tests takes much more time than declaring variables.
As for the example in the article, there are other ways to do things like this. The article assumes that you map the fields in the database to fields in PHP objects.
Why not put more functionality into the database using stored procedures. That way the application will be much faster, if done right you will also have a natural way to get separation between function in the database and presentation written in PHP. By using template systems such as PHPTal, the separation could be even clearer.
The downside is of course that you will get hard ties to whatever database engine you are using. This is usually not a big problem unless you are making some kind of product in PHP, that you intend to sell to others to use as you then would like the same code to run on as many databases as possible. However if you do some coding for your own organization, it is highly likely that the database infrastructure will last a lot longer than the look of the website.
2006-02-21 9:10 am - edmnc
You can easily change the error warning level in PHP configuration (or at runtime) and PHP will display a Notice for each undeclared variable in your code.
2006-02-20 8:47 pm - werpu
Actually there used to be days when it was thought to be a good design decision to have explicit variable declarations at the top of the algorithm and a clean interface and implementation separation.
And given the fact that I constantly have to switch between non-declarative and declarative languages, I at least agree with the declaration part: it makes the code way more readable once the stuff gets bigger.
As for the interface implementation separation, that stuff fortunately can be covered better by autodoc tools.
2006-02-21 2:56 am - CrLf
"Even in C you declare variables where you need them."
That's only been possible since the '99 revision of the standard (C99), which not every compiler implements fully (I think gcc is compliant).
Now, some folks might have a distorted view of C, since they really have been compiling stuff with a C++ compiler (which no, C isn’t a subset of C++ anymore).
you can also create objects that bend at runtime, creating new methods and member variables on the fly. You can’t do that with the Java, C++, or C# languages.
Not that I have a problem with people exploring this type of functionality, but I just want to point out that this is not a deficiency in the other languages. They’re aimed quite firmly at a different development model. It would be dead simple to put this sort of stuff into a virtual machine language and it can already be done to a degree with runtime bytecode engineering. It’s just not a native part of the language because it’s too easily abused.
2006-02-20 7:28 pm - jayson.knight
I don’t know about Java or C++, but you absolutely can do this in C# using either Reflection.Emit or the System.CodeDom namespace, and this functionality has been in the CLR since v1.0. For an excellent high level primer on this, have a look at this article:….
“…but you can also create objects that bend at runtime, creating new methods and member variables on the fly…”
I can’t wait to see the interesting vulnerabilities this capability will foster when used in conjunction with badly written PHP code LOL.
The security community has its research cut out for it.
-Viz
2006-02-20 8:55 pm - AnonaMoose
Howdy
Javascript lets you do this kind of thing and it does not really have to lead to security problems.
Remember PHP code is run on the server, not on the client; all the client sees is the result. So adding a new method or variable to a running instance would require hacking the server and then gaining access to the running container and the instance in the container (container = Zend engine etc).
Realistically you'd take over the server and then run your own code or server.
2006-02-20 9:10 pm - kamper
Javascript lets you do this kind of thing and it does not really have to lead to security problems.
The idea of a security vulnerability due to sloppy javascript writing is kind of dumb. The language, by the nature of where it runs, is simply the least secure thing imaginable. The reason javascript mistakes don’t matter is because javascript should never (ever) be touching sensitive data anyways.
so to add a new method or variable to a running instance would require hacking of the server
No, the point is that a programmer can accidentally add stuff they didn’t mean to and this would allow a cracker to gain access.
2006-02-21 11:54 am - druiloor
> The idea of a security vulnerability due to sloppy
> javascript writing is kind of dumb. The language, by
> the nature of where it runs, is simply the least secure
> thing imaginable. The reason javascript mistakes don’t
> matter is because javascript should never (ever) be
> touching sensitive data anyways.
Right… It’s just a scripting language, the fact it started off at Netscape as a web-browser feature doesn’t mean it can only be used for that:
Comparing it to the Apache mod_php one may have a look at the old mod_js or more recently mod_whitebeam or something –
2006-02-22 8:47 amkamper
Right… It’s just a scripting language, the fact it started off at Netscape as a web-browser feature doesn’t mean it can only be used for that: …
Oh, granted. Along with vbscript, it was also the syntax for asp and I believe it can still be used interchangeably with vbs for general windows scripting.
In my post I was referring to javascript as the interpreters/object models that exist within web browsers. I just figured it was obvious enough that I didn't need to bother specifying :-p
2006-02-22 10:21 amdruiloor
> Along with vbscript, it was also the syntax for asp and
> I believe it can still be used interchangeably with vbs
> for general windows scripting.
Although I don't use MS-Windows, I'm pretty sure you're correct. However, KDE[0] (kjscmd) and Gnome[1] (mjs) have similar functionality. AKA ECMAScript[2], it's probably stock platform-agnostic to a greater extent than PHP is.
[0]:
[1]:
[2]:…
2006-02-22 6:06 pmkamper
it's probably stock platform-agnostic to a greater extent than PHP is.
What’s your point? As I said, I wasn’t talking about abstract javascript and all the places it can be applied. I was talking specifically (and only) about scripting within webpages in a browser. Stop trying to add irrelevant things to the discussion.
2006-02-21 12:36 amAnonaMoose
The idea of a security vulnerability due to sloppy javascript writing is kind of dumb.
Exactly! Although I'm not saying it might not ever happen, it does not need to be a security nightmare with enough thought on the implementation.
No, the point is that a programmer can accidentally add stuff they didn’t mean to and this would allow a cracker to gain access.
I fail to see why adding a method at runtime leads to this; bad programming is bad programming, and if they cannot account for what they add and why, then their static code would be questionable as well.
The sheer amount of code may lead to risks, but the idiom behind it does not need to.
Sorry, but the fact that PHP is a server-side language IS the problem – if a PHP developer makes mistakes, it allows hackers (via GET or POST data) to run malicious code on the server, like getting passwords or other sensitive data. This is called PHP or SQL injection.
Failure to validate inputs is called stupidity; GET/POST buffer overflow is a little different, but I seriously fail to see why the ability to dynamically add a method suddenly makes this happen more easily. If you can tell me an example of this, I'll happily listen.
2006-02-21 4:17 amkamper
if you can tell me an example of this I`ll happily listen.
You have some user input that you’re going to inject into a sql query, a search term or something. You have a function that validates the input and returns a boolean and you rely on this function to make sure the input isn’t malicious. But you accidentally pass $serach_term to the function, which somehow results in a pass while $search_term (what you really wanted to validate) is malicious. You then proceed to build the query using the correct but malicious search term.
Sure, it’s a contrived example and lots of common sense things could prevent this, but the point is that any time your code starts doing something you didn’t expect it to, you have zero guarantee that it’ll be safe.
2006-02-21 12:25 pmmadhatter
I didn't want to say anything against the dynamic nature of PHP,
just wanted to say you don’t really need to hack the (whole) server.
Additionally, it's not that easy to avoid injection; projects like phpBB have/had a lot of security problems because of injection.
P.S.: I'm not against PHP or anything – in fact I like and use PHP, but PHP also has its disadvantages.
I was a PHP programmer for a few years, but now I just find it boring and tedious.
I converted to Ruby 6 months ago; it's so delicious. It's like C++, Smalltalk & Perl rolled up into this candy bar that is almost perfect. (Nothing is perfect.)
The Rails development just blew me away, and I think PHP will find it hard to mimic a similar framework based purely on its design of the OO methodology.
I won't go into details as to why; you'll just have to take a look and see why PHP is so old school.
Check out the 15-minute demo on building a weblog. It doesn't teach you how powerful Ruby is, but it will illustrate the power of Rails, which sits on top of Ruby.
PHP is not even close to this functionality, and it's what ultimately sold me to be a complete convert.
2006-02-22 12:51 amsirwally
“I converted to Ruby 6 months ago, its so delicious. Its like C++, Smalltalk & Perl rolled up into this candy bar that is almost perfect. (nothing is perfect)”
I'm sorry, my brain is stuck in an infinite loop repeating "& Perl" interjected with the word "Perfect". I guess it's not used to seeing those two words in the same sentence. 😉
I’ll stop trolling/cracking lame jokes now, for this is about PHP, not Perl, although both languages do lend themselves to being horribly tortured by programmers (or are they torturing the programmers, I’m not sure).
FWIW, I wrote PHP & Perl for a number of years. I’m glad those years are far behind me.
Long live C# 😉 … at least until something better comes along.
Which hosting providers provide these languages?
I found “dreamhost” for Ruby and PHP5.
I currently use GoDaddy and PHP4. I asked them about the availability of PHP5 and got the “I don’t know when” answer.
PHP4 is okay and I created a nice web architecture in it, but I wouldn’t mind investing either in PHP5 or a more powerful language.
I just need a place to host the websites for a newer PHP or alternative language.
I could just about have predicted that a new RoR convert would post here in this thread and tell us all that RoR has changed their life. Newsflash: yes, we've all heard about Rails by now. Some of us have used it too. It's getting a bit like how often Linux gets mentioned in threads about other OSes.
Since Rails *has* been mentioned though, I'll suggest that people have a look at CakePHP (cakephp.org) if they want a similar framework, but one built on top of PHP. CakePHP was inspired by Rails, but the developers are evolving it separately. The framework is pretty young and things are still changing rapidly, but I found that the framework is usable already.
There are other Rails-like PHP frameworks around, but I haven’t tried them yet. Maybe someone could give us a comparison?
I've been playing with PHP for about 2 or 3 years now. In comparison to the other big boys out there, I find PHP just too, too loose! For example, declaring variables. PHP allows you to create variables almost everywhere inside the code. For me, this is a no-no; force people to declare their variables and specify a type. Anything less makes for poor programmers.
Anyone else?
|
https://www.osnews.com/story/13738/going-dynamic-with-php-v5/
|
CC-MAIN-2019-09
|
en
|
refinedweb
|
Hi, I wrote Java code to upload files into the HCP namespace folder. I am also able to retrieve the objects from the folders using my Java code. But I don't know how to get the list of contents in an HCP namespace folder using Java. Is there a better way to query the list of contents of the HCP namespace folders and get the list of file names from the folder?
|
https://community.hitachivantara.com/thread/14143-hi-i-wrote-a-java-code-to-upload-the-files-into-the-hcp-namespace-folder-i-am-also-able-to-retrieve-the-objects-from-the-folders-using-my-java-code-but-i-dont-know-how-to-get-the-list-of-contents-in-a-hcp-namespace-folder-using-java-is-there-any-bett
|
CC-MAIN-2019-09
|
en
|
refinedweb
|
Entity Framework
Sentry has an integration with
EntityFramework through the Sentry.EntityFramework NuGet package.
Looking for
EntityFramework Core? That integration is achieved through the Sentry.Extensions.Logging package.
Installation
Using package manager:
Install-Package Sentry.EntityFramework -Version 1.0.0
Or using the .NET Core CLI:
dotnet add package Sentry.EntityFramework -v 1.0.0
This package extends
the Sentry main SDK. That means that besides the EF features, through this package you'll also get access to all APIs and features available in the main
Sentry SDK.
Features
- Queries as breadcrumbs
- Validation errors
All queries executed are added as breadcrumbs and are sent with any event which happens on the same scope. Besides that, validation errors are also included as
Extra.
Configuration
There are 2 steps to adding Entity Framework 6 support to your project:
- Call
SentryDatabaseLogging.UseBreadcrumbs() from either your application's startup method, or from a static constructor inside your Entity Framework object. Make sure you only call this method once! This will add the interceptor to Entity Framework to log database queries.
- When initializing the SDK, call the extension method
AddEntityFramework() on
SentryOptions. This will register all error processors to extract extra data, such as validation errors, from the exceptions thrown by Entity Framework.
Example configuration
For example, configuring an ASP.NET app with global.asax:
public class MvcApplication : System.Web.HttpApplication
{
    private IDisposable _sentrySdk;

    protected void Application_Start()
    {
        // We add the query logging here so multiple DbContexts in the same project are supported
        SentryDatabaseLogging.UseBreadcrumbs();

        _sentrySdk = SentrySdk.Init(o =>
        {
            // We store the DSN inside Web.config
            o.Dsn = new Dsn(ConfigurationManager.AppSettings["SentryDsn"]);
            // Add the EntityFramework integration
            o.AddEntityFramework();
        });
    }

    // Global error catcher
    protected void Application_Error()
    {
        var exception = Server.GetLastError();
        SentrySdk.CaptureException(exception);
    }

    public override void Dispose()
    {
        _sentrySdk.Dispose();
        base.Dispose();
    }
}
Sample
Check out a complete working sample to see it in action.
|
https://docs.sentry.io/platforms/dotnet/entityframework/
|
CC-MAIN-2019-09
|
en
|
refinedweb
|
Small library for shutting down and starting up a local server
Project description
This library handles a rather specific remote power management scenario.
- Server can be woken up with wake-on-lan (WOL)
- SSH access with public key authentication to the server (for shutting it down)
- Server responds to ping (checking whether the server is running)
Installation:
pip install manage_server_power
or clone the repository and use
python setup.py install
Configuration and notes
On the server side:
- Enable WOL (from BIOS/… settings).
- Take note of relevant MAC address.
- Add new user to server (say, powermanagement).
- Edit sudoers (visudo) and add powermanagement ALL=NOPASSWD: /sbin/poweroff
On the management computer:
- Generate ssh public key pair (ssh-keygen)
- Copy public key to powermanagement user on server (add it to .ssh/authorized_keys)
- Connect to server to check that public key works properly and to add server host key to known_hosts.
Notes:
- At least with some devices/networks, WOL won't work unless broadcast_ip is set to the local network's broadcast address instead of 255.255.255.255.
Usage
import manage_server_power

sp = manage_server_power.ServerPower(
    server_hostname="example.com",
    server_mac="61:a3:18:1c:84:4b",
    server_port=22,                 # optional, default is 22
    ssh_username="powermanagement",
    broadcast_ip="192.168.1.255",   # optional, default is 255.255.255.255
    socket_timeout=0.5,             # optional
    wol_port=9,                     # optional, default is 9
)

print sp.is_alive()   # SERVER_DOWN, SERVER_UP or SERVER_UP_NOT_RESPONDING
print sp.wake_up()    # send WOL packet
print sp.shutdown()   # ssh in and run "sudo poweroff"
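For instance, a small helper that wakes the server and then waits until it answers ping could look like this (only a sketch; it assumes the status constants shown above are exposed as attributes of the manage_server_power module):

import time
import manage_server_power

def wake_and_wait(sp, timeout=120, poll_interval=5):
    # Send the WOL packet, then poll is_alive() until the server
    # responds to ping or the timeout expires.
    sp.wake_up()
    deadline = time.time() + timeout
    while time.time() < deadline:
        if sp.is_alive() == manage_server_power.SERVER_UP:
            return True
        time.sleep(poll_interval)
    return False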
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
|
https://pypi.org/project/manage_server_power/
|
CC-MAIN-2019-04
|
en
|
refinedweb
|
US20060084410A1 - Flexible billing architecture - Google Patents
Info
- Publication number
- US20060084410A1
- Authority
- US
- Grant status
- Application
- Patent type
-
- Prior art keywords
- events
- event
- billing
- service
- mobile
A flexible billing system captures communication events on a more granular level than current communication systems. The captured communication events can be used to ensure services are only enabled for billable entities.
FIG. 1 is a block diagram of a communication system that implements a flexible billing system. FIG. 2 is a flow diagram describing how events can be formatted and aggregated for different operator requirements. FIG. 3 is a block diagram showing how transaction events are tracked by a central billing manager. FIG. 4 is a flow diagram showing how captured events can be aggregated over different dimensions. FIG. 5 is a block diagram showing how different services and different associated events are tracked by a management server. FIG. 6 is a block diagram showing how the flexible billing system can uniquely identify different users and services. FIG. 7 shows a sample event report generated by the flexible billing system. FIG. 8 shows a sample event table that can be used in the flexible billing system. FIG. 1 shows an example of a mobile text communication network 12 that may operate similarly to the networks described in U.S. patent application Ser. No. 10/339,368 entitled: CONNECTION ARCHITECTURE FOR A MOBILE NETWORK, filed Jan. 8, 2003, and U.S. patent application Ser. No. 10/339,368 entitled: SECURE TRANSPORT FOR MOBILE COMMUNICATION NETWORK, filed Jan. 8, 2003, which are both herein incorporated by reference.
- In FIG. 1, the enterprise network 18 can include an enterprise server 30 that may contain a user mailbox 33 accessible by a Personal Computer (PC) 34. In one example, the enterprise server 30 may be a Microsoft® Exchange® server and the PC 34 may access the mailbox 33 through a Microsoft® Outlook® software application. The mailbox 33 and enterprise server 30 may contain emails, contact lists, calendars, tasks, notes, files, or any other type of data or electronic document that may be accessed by mobile device 21 or personal computer 22. An enterprise client 32 operated in enterprise server 30 operates as a connector for communicating with management server 20.
- FIG. 8. The event table 35 operates as a filter to identify what attributes are detected for different events by the billing system.
- FIG. 8 will be described in more detail below. FIG. 2 shows in more detail how the billing manager 26 in management server 20 may aggregate and format captured events data 42. In operation 50, the billing manager and/or the clients 32 or 38 in enterprise network 18 extract event data during communication activities between the mobile device 21 or remote PC 22 and the enterprise network 18. The extracted event data is output as a raw event stream in operation 52. The raw event stream may be converted into a format and delivery protocol required by the operator in operation 54. For example, the network operator may require the raw event stream to be formatted in a particular database format and then delivered to an operator billing server via an Internet transaction using a File Transport Protocol (FTP).
- 27 in FIG. 1 may provide a generic billing adapter that generates billing records, reports, session logs, or audit logs. The generic billing adapter abstracts specific billing format and transmission requirements. The extensible framework in FIG. 2 facilitates billing integration with a large variety of different mobile network operators. Industry-standard reporting tools, such as Crystal Reports, may be integrated with the captured event data to provide mobile operators with familiar interfaces and formats.
- The billing manager 26 in FIG. 1 can generate a standard set of reports based upon the aggregated event data. This enables operators to have quick and easy access to service and usage data. For example, the billing manager 26 can identify the total requests made by mobile device 21, by service, and by time. User sessions and an average duration of the user sessions can be identified by device, by month provisioned, and by time. Billing reports can also identify the number of requests by type of request, by device, by month provisioned, or by time. Billing reports can also identify provisioned and active users, by month provisioned and by time. Session logs or audit logs can also be generated by date range.
- FIG. 3 shows in more detail how the billing manager 26 can track specific events in user transactions 70 and 72. This is described in more detail in U.S. patent application Ser. No. 10/339,368 entitled: CONNECTION ARCHITECTURE FOR A MOBILE NETWORK, filed Jan. 8, 2003, and U.S. patent application Ser. No. 10/339,368 entitled: SECURE TRANSPORT FOR MOBILE COMMUNICATION NETWORK, filed Jan. 8, 2003, which have both already been incorporated by reference.
- FIG. 1. Accordingly, the Enterprise Client (connector) 32 can identify the decrypted events received from and sent to mobile device 21 and then capture the events or event attributes that correspond to the items flagged in event table 35. The personal client 38 in FIG. 1 can operate in a similar manner.
- FIG. 3 are just examples of the many different types of events that can be sent, received, initiated, or associated with mobile device 21. Some additional examples of mobile device events and event parameters that may be detected by the billing manager 26 or the enterprise client 32 are described below.
- (FIG. 3). For example, creating, suspending, reactivating, or deleting a user account or changing a user password. Similar information may be captured and recorded by the billing manager 26 for transactions associated with Internet Service Provider (ISP) accounts, such as suspending, reactivating, deleting, reconfiguring, etc., ISP accounts.
- FIG. 3. The mail activities can also include events associated with viewing or manipulating email folders.
- FIG. 2 to extract the underlying aggregated event data and format it for transmission to one or more billing data collectors. FIG. 4 shows some of the different types of aggregation that allow the billing system to scale to a larger number of events per day. Aggregated, or not, the event data can be organized in multiple different dimensions. In this example, the event data is organized into any combination of user 82, service 84, and time 80 dimensions. Event counts are represented as measures, and the lowest level at which event data is recorded in the standard aggregated view is per-session.
- Billing Models
- Through the use of billing adapters and access to aggregated and un-aggregated event data, the billing manager 26 can support any combination of billing models, including service-based, event-based, time-based, and session-based.
- Service-Based Billing
- FIG. 5 shows one example of how service-based billing records may be generated to provide the operator information necessary to bill users on a periodic basis for subscribed services. In order to facilitate this, the billing manager 26 may utilize the following captured data:
- In FIG. 6, the mobile device 90A is configured to operate with the email service 92B provided by ISP 96. The user of mobile device 90A may send an email read request 110 via management server 20 to ISP 96. The billing manager 26 identifies particular parameters associated with the email read request 110 that associate the request 110 with a particular user and with a particular service.
- FIG. 3 and provides the information necessary to bill subscribers on a periodic basis for each chargeable action. The billing manager 26 may use any of the event categories described above or described in FIG. 8 to extract any combination of event data. Examples of potentially billable events include access to content (e.g. mail, contacts) from a particular ISP; voice call initiated from a contact lookup; email message retrieved; email message sent. Event-based billing records typically include actions that the operator has identified as billable, and may exclude other events that are not being billed.
- FIG. 7 shows an example where an operator utilizes event-based billing on top of aggregation by session, with the event data formatted into an IPDR/File standard. The billing record in FIG. 7 has been generated for a time period containing activity for two users, "juser" and "sammy7". The IPDRDoc metadata is included, showing a total count of events contained in this IPDRDoc, namespace and schema definitions, document ID, and creation time.
- The events in FIG. 7 are a subset of the events tracked by the billing manager 26. In this example, the operator may have configured billing manager 26 to charge users based upon the frequency of actions during each user session. Each billable event has been given an easily identifiable name. For example, an event associated with viewing a mail folder "mailFolderViews"; delivering a message "messagesDelivered"; sending a message "messagesSent"; messages sent with attachments "messagesSentWithAttachments"; and viewing the attachment "attachmentsViewed". More compact representations are also possible if data volume is a concern.
- Categorizing Captured Events
- FIG. 8 shows in more detail the event table 35 previously referred to in FIGS. 1 and 3. The event table 35 allows events and/or event attributes to be quickly and flexibly categorized into service and non-service events. Service events correspond with direct end user actions and non-service events correspond with administrator generated actions, such as an event generated by the communication management system 16 (FIG. 1). As described above, there are extra attributes that can be tracked for both service and non-service events. For example, a timestamp may be used to indicate when the action causing the event occurred or how long a user accessed a service.
- Referring back to FIG. 3, in the case of sync messages, the timestamp may refer to the time returned by the connector (enterprise client 32) for the event. For example, enterprise client 38 in the enterprise network 18 in FIG. 1 may generate the event times corresponding with the sync message 70D sent by mobile device 21. Alternatively, the event times may be generated by the management server 20 upon receiving the sync message or the response transaction 77. When the sync message is end-to-end or sent to an ISP, the event time may be generated and tracked by the management server 20. Hence the time the event occurs may not necessarily be the actual time the user action was initiated.
- Extensibility
- As also described above, the billing system may be extended to operate outside of the communication management system 16. This may be necessary to support tracking of additional events requested by operators. For example, in FIGS. 1 and 3, the enterprise client 32 in enterprise network 18 or the device client 23 in mobile network 14 may capture events where applicable and either independently generate billing records or send the captured events to billing manager 26 in management server 20 for supplementing billing data 44 or 46.
Claims (33)
- 1. A communication management system, comprising: one or more processors identifying different communication events and parameters associated with a mobile device according to a configurable event tracking table, some of the one or more processors then combining the identified communication events and parameters into a report.
- 2. The communication management system according to claim 1, wherein the one or more processors are configurable through the event tracking table to identify service-based events that correspond with direct mobile device actions or non-service events that correspond with administrator generated actions.
- 3. The communication management system according to claim 2, wherein the service-based events include data access events used for viewing or manipulating email, calendar, appointment, or data files and the non-service events include activating or deactivating a mobile network access account.
- 4. The communication management system according to claim 1, wherein the one or more processors are configurable to categorize the identified events into event-based billing reports that identify a number of different text or voice access events initiated by the mobile device.
- 5. The communication management system according to claim 4, wherein the event-based billing reports identify different email, calendar, or contact viewing and manipulation operations.
- 6. The communication management system according to claim 1, wherein the one or more processors are configurable to categorize the identified events into time-based billing reports that identify when or how long the mobile device is connected to an enterprise network or service.
- 7. The communication management system according to claim 1, wherein the one or more processors are configurable to categorize the identified events into service-based billing reports that identify different services used by the mobile device.
- 8. The communication management system according to claim 7, wherein the one or more processors identify Internet Service Providers (ISPs) associated with some of the events and generate reports containing the events associated with the same identified ISPs.
- 9. The communication management system according to claim 7, wherein the one or more processors combine identified events associated with a same service identifier and a same user identifier into a same user billing report.
- 10. The communication management system according to claim 1, wherein the one or more processors are configured to categorize the identified events into session-based billing reports according to wireless sessions established by the mobile device.
- 11. The communication management system according to claim 10, wherein the session-based billing reports include a session start time, session end time, and session identifier for the identified sessions.
- 12. The communication management system according to claim 1, wherein the one or more processors aggregate the identified event data according to associated user identifiers, event identifiers, service identifiers, or session identifiers.
- 13. The communication management system according to claim 1, wherein: at least one of the processors is located in a central management server that forwards transaction requests from the mobile device to an enterprise network that sends transaction responses back to the mobile device; and at least one of the processors is located in the enterprise network and sends event identifiers for the transaction requests and transaction responses back to the central management server for generating billing reports.
- 14. The communication management system according to claim 13, wherein the management server sends the event tracking table to the enterprise network and the enterprise network then identifies and sends event identifiers back to the central management server according to events and parameters identified in the event tracking table.
- 15. A method for tracking billing events, comprising: identifying communications associated with a mobile device; identifying different types of service related events in the communications associated with mobile device initiated communication events and non-service related events associated with system administration related events; and outputting in real-time or in a batch mode billing information that identifies particular types of service related events and non-service related events in the communications.
- 16. The method according to claim 15, including identifying the identified service and non-service related events according to a configurable event table.
- 17. The method according to claim 15, including distinguishing between different types of user sessions, enterprise sessions, or service sessions in the non-service related events.
- 18. The method according to claim 17, including generating reports indicating a non-service session is added, deleted, suspended, activated, or reactivated.
- 19. The method according to claim 15, including distinguishing between different types of email, contacts, and calendar actions in the service related events.
- 20. The method according to claim 19, including identifying different viewing, deleting, adding and editing operations for the email, contacts, and calendar actions and generating a billing report that assigns different charges for some of the different identified operations.
- 21. The method according to claim 20, including identifying a number of viewing, deleting, adding and editing email, contacts, and calendar actions associated with the mobile device for a particular time period and including the identified number of actions and time period in the billing report.
- 22. The method according to claim 15, including: identifying an event that requests sending or viewing a file; identifying the size of the sent file associated with the event; and identifying the send or view event and the size of the associated file in the billing report.
- 23. The method according to claim 15, including: identifying a file request event; identifying a type of file transferred pursuant to the file request event; and billing for the file transfer according to the type of identified file.
- 24. The method according to claim 15, including categorizing events in the billing report according to a service type identifier and a user identifier associated with the events.
- 25. The method according to claim 24, including using a mobile device identifier, mobile device phone number or an IP address as the user identifier.
- 26. The method according to claim 15, including aggregating the service and non-service related events according to different user, service, or time categories.
- 27. The method according to claim 15, including generating a billing report that identifies a number of initiated mobile device sessions independently of how long the mobile device is connected in the sessions.
- 28. An event tracking system, comprising: a management server tracking events associated with wireless data communications between a wireless device and an enterprise network or between the wireless device and an Internet Service Provider (ISP), the management server configurable to track different configurable events at different configurable granularities and with different configurable associated parameters and then output the tracked events and parameters into a report.
- 29. The event tracking system according to claim 28, including an event table that identifies a list of available tracking events and tracking parameters, the management server generating a report containing the events and parameters enabled in the event table.
- 30. The event tracking system according to claim 29, wherein the management server sends the event table to the enterprise network or to the ISP, which then tracks the events enabled in the event table and sends the tracked events back to the management server for generating the report.
- 31. The event tracking system according to claim 28, wherein the management server separately categorizes the events associated with the enterprise network and the ISP into different reports.
- 32. The event tracking system according to claim 31, wherein the management server separately categorizes events associated with different Internet Service Providers (ISPs).
- 33. The event tracking system according to claim 28, wherein the management server tracks different electronic mail (email), calendar, or contact manipulation activities and categorizes the different activities such that they may be used for billing these activities at variable rates.
|
https://patents.google.com/patent/US20060084410A1/en
|
CC-MAIN-2019-04
|
en
|
refinedweb
|
FINRA has clarified that its Suitability Rule applies to broker-dealers' assessment of EB-5 transactions, but that the broker can and should assess the immigration benefit among the economic interests of the investor.
By letter dated August 26, 2013, FINRA issued guidance as to whether registered broker-dealers are required to comply with FINRA Rule 2111 in connection with private placements of securities in EB-5 Program transactions. Rule 2111, commonly referred to as the Suitability Rule, requires broker-dealers to have a reasonable basis to believe that a recommended securities transaction is suitable for a customer. This rule requires a broker-dealer to conduct a certain amount of diligence on the issuer to confirm that the securities may be suitable for some investors, and to evaluate each investor's risk profile to confirm that the securities are suitable for a particular investor.
FINRA issued the guidance in response to a letter from Trustmont Financial Group, Inc. requesting that FINRA take the position that the Suitability Rule shouldn't apply since the primary motivation for investors in EB-5 Securities Transactions is not to obtain investment returns, but to obtain the right of residence in the United States. FINRA disagreed with Trustmont's position, stating that EB-5 Securities Transactions are essentially the same as other private offering transactions.
After clearly stating that the Suitability Rule does apply in connection with EB-5 securities offerings, FINRA provided specific guidance as to how broker-dealers should take into account aspects of the EB-5 Program when determining whether an investment is suitable. For example, FINRA expects broker-dealers to conduct a reasonable investigation as to whether the proposed transaction complies with EB-5 Program requirements, such as whether the proposed investment is a qualifying project that will create the requisite number of jobs for U.S. workers. FINRA clarified that in evaluating whether the investment would be suitable for a particular investor, the broker-dealer may consider the investor's motive to obtain U.S. residency as a factor in the analysis. FINRA's letter essentially implies that a broker dealer can balance the EB-5 investor's expected financial return (which typically is lower than in non-EB-5 offerings) with the immigration benefit, but then some due diligence about the immigration benefit is required.
FINRA's guidance impacts both issuers of securities in EB-5 transactions as well as broker-dealers engaged to assist participants in these transactions. For issuers and sponsors of EB-5 Projects, engaging a broker-dealer may help with the diligence expectations of foreign investors and their agents, as well as impose a certain discipline in the offering process. Of course, this assistance comes with a price in the form of the fees charged by the broker-dealer. Prior to accepting an engagement for an EB-5 securities offering, broker-dealers should consider the unique aspects of the EB-5 Program, and whether they are prepared to conduct the immigration-related diligence and investigation necessary to meet FINRA's expectations with respect to the Suitability Rule.
|
https://www.lexology.com/library/detail.aspx?g=21b00f24-d1d1-4d83-ac1d-434d010c2ec5
|
CC-MAIN-2019-04
|
en
|
refinedweb
|
LESENSE_TimeCtrlDesc_TypeDef Struct Reference (EMLIB > LESENSE)
LESENSE timing control descriptor structure.
Definition at line 548 of file em_lesense.h.
#include <em_lesense.h>
Field Documentation
Set to true to delay the startup of AUXHFRCO until the system enters the excite phase. This will reduce the time AUXHFRCO is enabled and reduce power usage.
Definition at line 557 of file em_lesense.h.
Referenced by LESENSE_Init().
Set the number of LFACLK cycles to delay sensor interaction on each channel. Valid range: 0-3 (2 bit).
Definition at line 551 of file em_lesense.h.
Referenced by LESENSE_Init().
The documentation for this struct was generated from the following file:
- C:/repos/embsw_super_h1/platform/emlib/inc/em_lesense.h
|
https://docs.silabs.com/mcu/5.4/efr32bg14/structLESENSE-TimeCtrlDesc-TypeDef
|
CC-MAIN-2019-04
|
en
|
refinedweb
|
Naming
"To call things by incorrect names is to add to the world's misery."
This page documents the naming rules for various things defined in EXPath specifications. The rules are organized by the nature of those things to be named.
Namespaces
A namespace is identified by a URI. The namespace associated to a
specification is of the form http://expath.org/ns/{module},
where
module must be replaced by the specification
short code. The same short code is used in the address of the
specification document itself. It must consist only of lower case latin
letters (from a to z) and of dashes (dashes cannot appear at
either end of the code, and there cannot be two consecutive dashes).
A short code must be at least 3 letters-long.
For instance, the namespace URI for the HTTP Client module is http://expath.org/ns/http-client.
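These constraints on short codes can also be checked mechanically; the following snippet is only an informal illustration and is not part of any EXPath specification:

import re

def is_valid_short_code(code):
    # lower case latin letters and single dashes, no dash at either end,
    # no two consecutive dashes
    if not re.fullmatch(r"[a-z]+(-[a-z]+)*", code):
        return False
    # at least 3 letters overall (dashes not counted)
    return sum(c.isalpha() for c in code) >= 3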
In addition to a short code, a specification will define a prefix bound to the namespace (this is of course a convention, as the namespace mechanism fortunately guarantees that the user can choose whatever prefix is best suited). That prefix is used consistently across the specification every time a lexical QName refers to that namespace (if a specification refers to a QName defined in another specification, it must use the same prefix as in the other specification). The namespace prefixes must be unique among all EXPath specifications. The prefix might be the same as the specification short code, but does not have to be.
For instance, the namespace prefix for the HTTP Client module is
hc.
Functions
Function names must be valid QNames (as defined by XPath). The namespace URI and prefix are defined at the specification level, as documented below. The function local name must follow the following rules (in addition to being a valid NCName, indeed):
- contain only lower case latin letters (from a to z) and dashes
- dashes cannot appear as the first nor last character
- there cannot be two consecutive dashes
- digits are tolerated if needed (e.g. in
base64)
Each simple word is separated by a dash. The only exception where
upper case latin letters can appear (from A to Z) is when the
domain already contains such names (for instance the XML Schema type
xs:hexBinary). This is discouraged though.
Within the specification (this is editorial) the function prototypes must look like the following:
p:some-name($arg as xs:integer) as element(p:info)?

p:some-name($arg      as xs:integer,
            $new-name as xs:string) as element(p:info)?
The several arities of the same function are given all at once (the
lower arity first). Parameters are given very short names (they must
be defined in the text right after anyway, do not try to put all the
semantics in the parameter name, just enough to distinguish it from the
other parameters). The same parameters in two different arities must
be given the same name. The first parameter appears right after the
opening
(, the second parameter in a new line (its
$ symbol must be right before the first one). The third
parameter appears on the third line, with the same indentation, and so
on. All the type clauses (all the
as keywords) appear
below each other. The line of the last parameter ends with the
closing
) and the function type declaration.
Error codes
Error codes follow the same naming conventions as the function names (lower case latin letters, names separated with dashes, digits tolerated). As with functions, the error codes are in the specification namespace (they are not in a specific error namespace).
Error codes should be kept short, though carrying enough information about the error conditions to the reader. The entire definition does not have to be contained in the name, but it must be expressive enough to intuitively distinguish it from other error conditions.
Using number codes (like XPath, XQuery, XSLT and F&O specifications themselves do) is deprecated.
Specifications
The URL where a specification is published can be of several forms:
- http://expath.org/spec/{code}
- http://expath.org/spec/{code}/{X.Y}
- http://expath.org/spec/{code}/{YYYYMMDD}
- http://expath.org/spec/{code}/editor
where:
- code is the short code, like http-client
- X.Y is the version number, like 1.0
- YYYYMMDD is the publication date, like 20130818
In addition, when it is applicable, the URL can end with:
- /diff for a version where differences with the previous version are highlighted
- .xml for the original XML source for the specification
The editor copy (the URL ending with
/editor) is the
latest version the editor is working on currently. This is useful in
order for the editor(s) to share the current state of their own copy,
without requiring them to actually publish a draft. The typical use
case is to ask on the mailing list, after having integrated changes
discussed in a long email chain, whether everyone is happy with the
wording. This is not really a published document.
A draft is published at some point in time, and must be in a consistent state (some sections can be incomplete though). A draft is a public document, open to comments by anyone.
Once a specification is approved, it is assigned a version number.
The first one is
1.0, then
2.0, and so on.
A sub-version can be created if the editor and the Community Group
feel it is the right thing (like
2.1).
The latest version of a specification (the shortest URL, ending with the short code only) refers to the latest version of the specification if there is any. If it has not been approved yet, then it refers to the latest (pre-1.0) draft.
Examples of URLs for the http-client:
- http://expath.org/spec/http-client: the latest version
- http://expath.org/spec/http-client/1.0: the version 1.0
- http://expath.org/spec/http-client/1.0.xml: the version 1.0 as XML
- http://expath.org/spec/http-client/20130818: the draft from Aug. 18, 2013
- http://expath.org/spec/http-client/20130818/diff: the draft from Aug. 18, 2013, with differences with the previous draft highlighted
Notes
The quote in the introduction of this page is a translation from French: "Mal nommer les choses c'est ajouter au malheur du monde.", in Brice Parain, "Sur une philosophie de l'expression".
|
https://www.w3.org/community/expath/wiki/Naming
|
CC-MAIN-2019-04
|
en
|
refinedweb
|
read the columns of a worksheet and set any data in the columns named with "Date" in the title to format "dd/mm/yyyy"
By
joeloyzaga, in AutoIt General Help and Support
Recommended Posts
Similar Content
- By ahha
- By AnonymousX
Hello,
I'm trying to be able to switch back and forth between multiple Excel spreadsheets and I can't seem to get the WinActivate function to work and bring the desired window to be the active window.
Could I please get some assistance? I've tried a few things and nothing seems to work quite right. Below is a test case where I'm just trying to make the first Excel sheet that was opened become the active window, and testing it by grabbing a cell value off that workbook. The message box produces the correct answer if both files are closed before running, but the 2nd test file will appear to be the active window. If the code is run again without closing the Excel files, nothing works (the file does not appear to be active and the message box will not give an answer).
#include <Excel.au3>

Opt("WinTitleMatchMode", 2) ;1=start, 2=subStr, 3=exact, 4=advanced, -1 to -4=Nocase

;Open Test1 Excel Workbook
Local $oExcel = _Excel_Open()
Local $ofile = @ScriptDir & "\test1.xlsx"
Local $oWorkbook = _Excel_BookOpen($oExcel, $ofile)

;Open Test2 Excel Workbook
Local $mExcel = _Excel_Open()
Local $mfile = @ScriptDir & "\test2.xlsx"
Local $mWorkbook = _Excel_BookOpen($mExcel, $mfile) ; This workbook is completely blank

WinActivate($oWorkbook) ; should make Test1 the active window
Local $read1 = _Excel_RangeRead($oWorkbook, Default, "B2") ; Cell B2 in Test1 workbook contains the word Test
MsgBox(0, 0, $read1) ; Should return the word test
- By meral40
#include <Excel.au3>
#include <MsgBoxConstants.au3>
#include <Array.au3>

; Create application object and open an example workbook
Local $var1 = "D:\Documents\testbook.xls"
Local $oExcel = _Excel_Open()
Local $oWorkbook = _Excel_BookOpen($oExcel, $var1)
Local $sRead = _Excel_RangeRead($oWorkbook, Default, "Q2")
Local $sRead2 = _Excel_RangeRead($oWorkbook, Default, "Q2")
$text1 = "hello there"
$text2 = "read me"

While 1=1
    If $sRead = $text1 Then
        ;MouseClick
        ConsoleWrite($sRead)
    ElseIf $sRead2 = $text2 Then
        ;MouseClick
        ConsoleWrite($sRead2)
    EndIf
    Sleep(30000) ;reads field every 30s
WEnd

OK, I am writing a script for Excel that monitors a field that changes every so often, then creates an action based on whether it contains text1 or text2. The problem is that if I run the script it reads the right text at first, but if I go and edit the text in Excel it still shows the text from before the change.
Thanks for your help.
|
https://www.autoitscript.com/forum/topic/166167-read-the-columns-of-worksheet-and-set-any-data-in-the-colums-named-with-date-in-the-title-to-format-ddmmyyyy/
|
CC-MAIN-2019-04
|
en
|
refinedweb
|
Usage¶
torchelastic requires you to implement a state object and a train_step function.
For details on what these are refer to how torch elastic works.
While going through the sections below, refer to the imagenet example
for more complete implementation details.
Implement state¶
The State object has two categories of methods that need to be implemented:
synchronization and persistence.
sync()¶
Let's take a look at synchronization first. The sync method is responsible for
ensuring that all workers get a consistent view of state. It is called at
startup as well as on each event that potentially leaves the workers out of sync,
for instance, on membership changes and rollback events. Torchelastic relies on
the sync() method for state recovery from surviving workers (e.g. when
there are membership changes, either due to worker failure or elasticity,
the new workers receive the most up-to-date state from one of the surviving
workers - usually the one that has the most recent state - we call this worker
the most tenured worker).
Things you should consider doing in sync are:
- Broadcasting global parameters/data from a particular worker (e.g. rank 0).
- (re)Initializing data loaders based on markers (e.g. last known start index).
- (re)Initializing the model.
> IMPORTANT: state.sync() is not meant for synchronizing steps in training. For instance
you should not be synchronizing weights (e.g. all-reduce model weights for synchronous SGD).
These types of collective operations belong in the train_step.
All workers initially create the state object with the same constructor arguments.
We refer to this initial state as S_0 and assume that any worker is able to create
S_0 without needing any assistance from torchelastic. Essentially S_0 is the bootstrap
state. This concept will become important in the next sections when talking about
state persistence (rollbacks and checkpoints).
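To make this concrete, here is a minimal sketch of what such a state class might look like (the class name, fields, and the choice of rank 0 as the broadcast source are illustrative assumptions for this example, not part of torchelastic's API; a real implementation would also carry your optimizer and data loader):

import torch
import torch.distributed as dist

class MyState:
    # Illustrative state object: holds the model and a progress marker.
    def __init__(self, model, start_index=0):
        self.model = model
        self.start_index = start_index  # last known position in the dataset

    def sync(self):
        # Broadcast the progress marker so every worker agrees on it.
        marker = torch.tensor([self.start_index], dtype=torch.long)
        dist.broadcast(marker, src=0)
        self.start_index = int(marker.item())
        # Broadcast model parameters so every worker starts from the same weights.
        for param in self.model.parameters():
            dist.broadcast(param.data, src=0)
        # (re)initialize the data loader from self.start_index here.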
(optional) capture_snapshot() and apply_snapshot()¶
> You do not have to implement these methods if you do not want rollbacks
from failed train_steps
torchelastic has the ability to roll back a state if a train_step fails to
execute successfully, which may result in the state object being left partially
updated. It relies on properly implemented capture_snapshot() and apply_snapshot()
methods of the state to ensure that the state is restored to before the
faulty train_step.
The capture_snapshot() method, as the name implies, takes a snapshot of the state
and returns the necessary information to be able to restore
the state object. You may return any object from capture_snapshot() so long as you
can use it in the apply_snapshot(snapshot) method. A possible implementation of a
rollback is:
snapshot = state.capture_snapshot()
try:
train_step(state)
except RuntimeError:
state.apply_snapshot(snapshot)
state.sync()
> NOTE: Since certain fields of the state may need to get re-initialized,
torchelastic calls the sync() method. For instance, data loaders may need
to be restarted as their iterators may end up in a corrupted state when the
train_step does not exit successfully.
Notice that the apply method is called on the existing state object, this implies
that an efficient implementation of snapshot should only return mutable, stateful
data. Immutable fields or fields that can be derived from other member variables or
restored in the sync method need not be included in the snapshot.
By default the capture_snapshot() method returns None and the apply_snapshot() method
is a pass, which essentially means “rollback not supported”.
> IMPORTANT: The apply_snapshot object should make no assumptions about
which state object it is called on (e.g. the values of the member variables).
That is, applying a snapshot
to any state followed by state.sync() should effectively restore the
state object to when the corresponding capture_snapshot method was called.
A good rule of thumb is that the apply_snapshot should act more like a set
method rather than an update method.
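As an illustration, a snapshot pair for the hypothetical MyState class sketched earlier could carry only the mutable training progress (again, the field names are assumptions for the example, not torchelastic requirements):

    def capture_snapshot(self):
        # Return only mutable, stateful data; everything else can be
        # re-derived or re-broadcast by sync().
        return {
            "start_index": self.start_index,
            "model_state": {k: v.clone() for k, v in self.model.state_dict().items()},
        }

    def apply_snapshot(self, snapshot):
        # Acts like a "set" method: overwrite fields regardless of current values.
        self.start_index = snapshot["start_index"]
        self.model.load_state_dict(snapshot["model_state"])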
(optional) save(stream) and load(stream)¶
> You do not have to implement these methods if you do not plan on using
checkpointing.
Much like capture_snapshot and apply_snapshot, the save and load methods form a pair.
They are responsible for persisting and restoring the state object to and from
a stream which is a file-like object
that is compatible with pytorch.save.
torchelastic relies on these methods to provide checkpoint functionality for your job.
> We encourage users to use torch.save and torch.load methods when implementing
save and load methods of their state class.
> NOTE: The default implementations of save and load use capture_snapshot
and apply_snapshot
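Following that suggestion, a possible save/load pair for the same hypothetical state class simply persists the snapshot dictionary with torch.save and torch.load (a sketch, not the required implementation):

    def save(self, stream):
        # Persist the same snapshot dictionary to a file-like stream.
        torch.save(self.capture_snapshot(), stream)

    def load(self, stream):
        # Restore from a stream produced by save().
        self.apply_snapshot(torch.load(stream))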
Implement train_step¶
The train_step is a function that takes state as a single argument
and carries out a partition of the overall training job.
This is your unit of work and it is up to you to define what
a unit is. When deciding what your unit of work should be, keep in mind the
following:
- Rollbacks and checkpoints are done at train_step granularity. This means
that torchelastic can only recover to the last successful train_step. Any failures
during the train_step are not recoverable.
- A train_step iteration in the train_loop has overhead due
to the work that goes in ensuring that your job is fault-tolerant and elastic.
How much overhead depends on your configurations for rollbacks and checkpoints as well
as how expensive your snapshot, apply, save and load functions are.
> In most cases, your job naturally lends itself to an
obvious train_step. The most canonical one for many training jobs is to map
the processing of a mini-batch of training data to a train_step.
There is a trade-off to be made between how much work you are
willing to lose versus how much overhead you want to pay for that security.
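For example, if the unit of work is one mini-batch, a train_step could look roughly like this (a sketch that assumes the state object owns the model, optimizer, loss function and a batch iterator; the field names are made up for this example):

def train_step(state):
    # Process exactly one mini-batch per call; rollbacks and checkpoints
    # happen at this granularity.
    inputs, targets = next(state.batch_iterator)
    state.optimizer.zero_grad()
    outputs = state.model(inputs)
    loss = state.loss_fn(outputs, targets)
    loss.backward()
    state.optimizer.step()
    # Advance the progress marker so sync()/snapshots can restore it.
    state.start_index += len(inputs)
    return loss.item()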
Write a main.py¶
Now that you have state and train_step implementations all that remains
is to bring everything together and implement a main that will execute your
training. Your script should initialize torchelastic’s coordinator, create
your state object, and call the train_loop. Below is a simple example:
import torchelastic
from torchelastic.p2p import CoordinatorP2P
if __name__ == "__main__":
    min_workers = 1
    max_workers = 1
    run_id = 1234
    etcd_endpoint = "localhost:2379"
    state = MyState()
    coordinator = CoordinatorP2P(
        c10d_backend="gloo",
        init_method=f"etcd://{etcd_endpoint}/{run_id}?min_workers={min_workers}&max_workers={max_workers}",
        max_num_trainers=max_workers,
        process_group_timeout=60000,
    )
    torchelastic.train(coordinator, train_step, state)
Configuring¶
Metrics¶
See metrics documentation.
Checkpoint and Rollback¶
See checkpoint documentation
Rendezvous¶
See rendezvous documentation
|
https://pytorch.org/elastic/0.1.0rc2/usage.html
|
CC-MAIN-2022-27
|
en
|
refinedweb
|
How to use authorization in Laravel: Gates, policies, roles and permissions
In a previous entry, I mentioned the importance of hand-in-hand authorization and authentication. Now, let’s talk about the many ways that Laravel provides to apply authorization to your application.
The Laravel documentation describes multiple tools to authorize access to your application. It goes into detail about creating, constructing, and applying these authorization mechanisms. However, it only gives light direction about which method is best to use in your application. That’s because each application is different, and the way you apply authorization can be subjective. One of the packages I describe later, Spatie’s Laravel Permission, also walks the same tightrope. They make sure to integrate with Laravel and provide robust features but generally hint at guidance.
So, how do you decide what authentication mechanism to apply? Do you use Laravel’s built-in tools, or must you install a third-party package to get the functionality you need?
This question is complicated, but we can work towards an answer. Let’s begin by examining what we have available to us.
Authorization tools available in a Laravel app
Laravel provides gates and policies right out of the box. You can read the authorization documentation for detailed implementation instructions. But let’s talk specifically about each and what they’re best used for.
Gates are more high-level and generic. While they can be applied to specific objects (or Eloquent models), they tend to be more business process-oriented or abstract. I like to think of this by picturing a gatekeeper. You have to get past the gatekeeper or bouncer to get into the club. Inside the club, that’s a whole different story as you interact with individuals. That’s more along the lines of policies.
Policies tend to match up nicely with the basic CRUD (Create, Read, Update, Delete) mechanics. By design, they provide a paradigm for these actions.
Policies tend to be applied to a resource, like an Eloquent model, to determine if the user can do one of the CRUD actions. They’re not limited to these actions, though. They can be expanded with custom actions and used to authorize relationships. (Laravel Nova uses registered policies to determine if you can do things like creating or attaching models to a relationship.)
Gates and policies offer a mix of fine-grain and abstract authorization, but they lack a hierarchy or grouping functionality. That’s where the Laravel-permission package by Spatie shines. It provides a layer that both Gates and Policies can take advantage of. This functionality includes roles, permissions and teams.
Permissions describe if a user can do an action in a general, non-specific sense. Roles describe the role a user may play in your application. Users can have multiple roles. Permissions are applied to roles. (Permissions can be applied to a user directly, but I’d advise against that.) Finally, and optionally, teams are groups of users. Those Teams can contain many roles.
Now you know what’s all available. See how this could get confusing? There seem to be many ways to solve the same problem if you’re looking at these definitions. Hopefully, things will clear up when we look at these in practice.
Laravel gate example
In this scenario, I want to ensure my user has gained at least 100 points to see the link for redemption. I’m storing the number of points directly on the
user model.
Let’s see what the gate might look like:
use App\Models\User;
use Illuminate\Support\Facades\Gate;
Gate::define('access-redemptions', function (User $user) {
return $user->points >= 100;
});
Now, let’s see what our navigation HTML in the Blade file may look like:
<nav>
<a href="{{ route('dashboard') }}">Dashboard</a>
@can('access-redemptions')
<a href="{{ route('redemptions') }}">Redemptions</a>
@endcan
</nav>
Here, we can see that we’re using Laravel’s Blade
@can directive to check the authorization of this action for the current user.
To complete the full check, we’d probably add something like this at the top of our method accessed for the
redemptions route:
use Illuminate\Support\Facades\Gate;
Gate::authorize('access-redemptions');
If the user did not have permission to
access-redemptions, an authorization exception would be thrown.
So, let’s break this down so we can tell why this was the perfect use for a gate:
- It is a generic action: “can we access a business process” is basically what the question is. Can I access redemptions? Well, only if you have 100 or more points.
- Even though it depends on the current user, an Eloquent model, it doesn’t apply to another model. We’re not checking some external resource or model for points. Instead, we’re looking at ourselves, what we know about our state, so that means it’s probably not something a policy would work with.
- We need to do a calculation, so roles and permissions are out.
They only provide a binary determination of whether an action is allowed or not.
Laravel policy example
To demonstrate a policy, let’s pick a simple example. I want to authorize only owners of a book to update it. Each book has a
user_id field that represents the owner.
Here’s what that policy class would look like:
namespace App\Policies;
use App\Models\Book;
use App\Models\User;
class BookPolicy
{
public function update(User $user, Book $book): bool
{
return $book->user_id === $user->id;
}
}
Now, we want to authorize our controller method. Normally I'd recommend using a resourceful controller with the authorizeResource() helper. But, let's demonstrate this in a more verbose way by applying it directly in the update() method of a BookController.
namespace App\Http\Controllers;
use App\Models\Book;
use Illuminate\Http\Request;
class BookController extends Controller
{
public function update(Book $book, Request $request)
{
$this->authorize('update', $book);
// ... code to apply updates
}
}
The BookController::authorize() method, or authorization helper, will pass the current user into the BookPolicy::update() method along with the resolved instance of the $book. If the policy method returns false, an authorization exception will be thrown.
Why is a Policy the chosen authorization tool? First, we are working with a specific type of action: we have a noun and want to do something. We have a book, and we want to update it in this case. Second, since it’s a specific Eloquent model, a Policy is the best tool to work with individual items. Finally, because this is a CRUD type action, and we’re already following the paradigm of naming methods after their action in the controller, that’s a great hint that we should be using the same method names in a policy named after that model.
Laravel role and permission example
To demonstrate the Role and Permission authorization tools, let’s think about an organization with departments. In that company, there is a sales team and a support team. The sales team can see client billing information but cannot change it. The Support team can see and update client billing information.
In order to accomplish this, I want two permissions and two roles. Let’s set them up:
use Spatie\Permission\Models\Role;
use Spatie\Permission\Models\Permission;
$sales = Role::create(['name' => 'sales']);
$support = Role::create(['name' => 'support']);
$seeClientBilling = Permission::create(['name' => 'see-client-billing']);
$updateClientBilling = Permission::create(['name' => 'update-client-billing']);
$sales->givePermissionTo($seeClientBilling);
$support->givePermissionTo($seeClientBilling);
$support->givePermissionTo($updateClientBilling);
We’ve registered two roles and applied the appropriate permissions to each role. Now, users who have these roles will inherit those permissions as well.
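To connect users to these roles, the Spatie package adds an assignRole() method, assuming you have added its HasRoles trait to your User model as the package requires. A quick sketch:
use App\Models\User;
$user = User::find(1); // any existing user
// give this user the support role; permissions attached to the role come along with it
$user->assignRole('support');
// the package registers its permissions with Laravel's gate, so this check now passes
$user->can('update-client-billing'); // true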
Now, let’s see a few methods in our billing controller.
namespace App\Http\Controllers;
use App\Models\Client;
use Illuminate\Http\Request;
class ClientBillingController extends Controller
{
public function show(Client $client, Request $request)
{
abort_unless($request->user()->can('see-client-billing'), 403);
return view('client.billing', ['client' => $client]);
}
public function update(Client $client, Request $request)
{
abort_unless($request->user()->can('update-client-billing'), 403);
// code to update billing information
}
}
Now, if a user visits ClientBillingController::show() with either the sales or the support role, they will have access to see the billing information. Only users with the support role, which grants the update-client-billing permission, will be able to submit to the update() method.
Why are Roles and Permissions the right authorization choice? You could accomplish the same sort of thing with Gates or, to some extent, Policies. But roles and permissions make it easier to understand and apply the permission approach in only one location. Let's say in the future you want Sales to be able to update client billing information as well: you'd only have to add the update-client-billing permission to the sales role, as sketched below. One quick change. You wouldn't have to check various gates or track down policies. This type of action, which is not necessarily unique to a specific model but provides levels of access or authorization, makes roles and permissions the perfect tool.
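For illustration, that future change might be nothing more than this one-liner, using the package's Role::findByName() helper:
use Spatie\Permission\Models\Role;
// the existing permission is simply attached to the sales role as well
Role::findByName('sales')->givePermissionTo('update-client-billing');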
TLDR; which authorization mechanism should I use?
Gates are used for specific functionality outside the standard CRUD mechanisms, and they're great for broad, sweeping access to whole sections or modules. Policies work best with the CRUD paradigm, authorizing specific objects or Eloquent models. Roles and permissions work well when each group or department needs to accomplish specific actions. Functionally, there is a lot of overlap between these tools, so you might find yourself mixing and matching. You may also integrate one inside another (like permission checking combined with ownership verification in a policy).
https://resources.infosecinstitute.com/topic/how-to-use-authorization-in-laravel-gates-policies-roles-and-permissions/
You might already have noticed that, as part of vSphere 6.5, VMware introduced vCenter Server REST APIs. I really enjoyed playing around with them using the vCenter apiexplorer as well as the Postman REST client. Recently, I wanted to code against these APIs using one of the programming languages, and I am happy that I was able to do it using Python. I thought it was worth sharing with you. In this blog post, I will take you through all the steps required to get started with the vCenter REST API using Python. Here we go.
Step 1. The first important thing is to get familiar with the vCenter server REST API documentation. Similar documentation is available from the vCenter apiexplorer as well. I would recommend you play with apiexplorer, which will not only make you familiar with the documentation but will also enable you to quickly invoke these APIs against your vCenter server.
Step 2. Install the "requests" Python module as follows:
$ pip install requests
Step 3. Now let us take a look at the below Python module, developed to simplify REST API usage.
[python]
# Author: Vikas Shitole
# Website:
# Product: vCenter server
# Description: Python module for vCenter server REST APIs
# Reference:
# How to setup vCenter REST API environment?: Just have VM with python and install "requests" python library using pip
import requests
import json
from requests.packages.urllib3.exceptions import InsecureRequestWarning
requests.packages.urllib3.disable_warnings(InsecureRequestWarning)
s=requests.Session()
s.verify=False
# Function to get the vCenter server session
def get_vc_session(vcip,username,password):
    s.post('https://'+vcip+'/rest/com/vmware/cis/session',auth=(username,password))
    return s
# Function to get all the VMs from vCenter inventory
def get_vms(vcip):
    vms=s.get('https://'+vcip+'/rest/vcenter/vm')
    return vms
# Function to power on particular VM
def poweron_vm(vmmoid,vcip):
    s.post('https://'+vcip+'/rest/vcenter/vm/'+vmmoid+'/power/start')
# Function to power off particular VM
def poweroff_vm(vmmoid,vcip):
    s.post('https://'+vcip+'/rest/vcenter/vm/'+vmmoid+'/power/stop')
[/python]
The above vcrest.py module is available on my GitHub repo.
Let us understand the above code.
Line 8: Imported powerful “requests” python library required to make API calls
Line 9: Imported “json” library required to parse json response we get from REST APIs
Line 10/11: Here we are disabling warnings related to SSL connection. In production, we should not disable it.
Line 13/14: Here we are creating Session object to have session persisted during the current request. If you see “s.verify” is set to False, it does mean that we are ignoring verifying SSL certificates. If you want to set it to true, please take a look at SSL Cert Verification section
Lines 16 to 32: I have added 4 methods, i.e. get_vc_session(), get_vms(), poweron_vm() and poweroff_vm(). We will be calling these methods from the sample script below. In each method, I followed the REST API documentation and called the API using the "requests" library.
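If you want to wrap more of the documented endpoints, they follow the same pattern. Below is a hedged sketch (not part of the original vcrest.py) that fetches the details of a single VM via GET /rest/vcenter/vm/{vm}; as a side note, instead of disabling SSL verification you can also point s.verify at your vCenter CA bundle path.
[python]
# Hypothetical extension: get the details of a particular VM using the same session "s"
def get_vm_details(vmmoid,vcip):
    response = s.get('https://'+vcip+'/rest/vcenter/vm/'+vmmoid)
    return response.json()["value"]
[/python]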
Step 4. Now that we understand the above "vcrest.py" module, let us import it into a script to demonstrate its usage.
[python]
# Description: Python sample to get VMs and its moid using vCenter server REST API.
# Reference:
# Make sure you have the "vcrest.py" file in your python directory.
import vcrest
import json
vcip="10.192.23.143" # vCenter server ip address/FQDN
#Get vCenter server session and can be used as needed. pass vcenter username & password
vcsession = vcrest.get_vc_session(vcip,"Administrator@vsphere.local","VMware1!")
#Get all the VMs from inventory using below method from "vcrest" module.
vms = vcrest.get_vms(vcip)
# Parsing the JSON response we got from the above function call (it has all the VMs present in inventory)
vm_response=json.loads(vms.text)
json_data=vm_response["value"]
print "VM names and its unique MOID"
print "============================"
for vm in json_data:
    print vm.get("name")+" :: "+vm.get("vm")
    #We are powering on all the VMs those are in powered off state
    if vm.get("power_state") == "POWERED_OFF":
        vcrest.poweron_vm(vm.get("vm"),vcip)
[/python]
The above script, i.e. vcrestsample.py, is available on my GitHub repo as well.
Output :
vmware@localhost:~$ python vcrestsample.py
VM names and its unique MOID
============================
NTP-India-1 :: vm-42
NTP-PA-2 :: vm-43
WebApp-1 :: vm-44
vThinkBeyondVM :: vm-45
vmware@localhost:~$
Let us understand the above script.
Line 5: Imported the "vcrest" module we just discussed above.
Line 10: We are getting a vCenter server session by calling the function defined in the "vcrest" module. We can use this session object as needed.
Line 13: We are getting all the VMs from the inventory using the get_vms() function defined in the "vcrest" module. Note that this call returns the JSON response shown below, which we need to parse to fetch useful information.
[python]
{
"value": [
{
"memory_size_MiB": 512,
"vm": "vm-42",
"name": "NTP-India-1",
"power_state": "POWERED_OFF",
"cpu_count": 1
},
{
"memory_size_MiB": 512,
"vm": "vm-43",
"name": "NTP-PA-2",
"power_state": "POWERED_OFF",
"cpu_count": 1
},
{
"memory_size_MiB": 512,
"vm": "vm-44",
"name": "WebApp-1",
"power_state": "POWERED_ON",
"cpu_count": 1
},
{
"memory_size_MiB": 512,
"vm": "vm-45",
"name": "vThinkBeyondVM",
"power_state": "POWERED_ON",
"cpu_count": 1
}
]
}
[/python]
Line 16/17: As we got the JSON response shown above, here we parse it so that we can easily access it as a Python dictionary.
Line 21 to 25: Iterating through the dictionary and printing the VM names and their moids (managed object ids). Finally, we power on the VMs that are off.
That is all. Isn't it cool? Since we have REST APIs available for the vCenter VM life cycle, VCSA, content library, tagging etc., there is a lot to learn and play around with. I will keep adding more methods to the vcrest.py module. If you are interested in contributing to this module, let me know; it would be really great. In case you would like to explore vCenter SOAP based APIs, please refer to my last post.
https://vthinkbeyondvm.com/tag/vcenter-server/
This article aims to teach how to do Generalized LR parsing to parse highly ambiguous grammars. I've provided some experimental code which implements it. I wanted to avoid making the code complete and mature because that will involve adding a lot of complexity to it, and this way it's a bit easier to learn. There's not a lot of information on GLR parsing online so I'm entering this article and the associated code into the public domain. All I ask is you mention honey the codewitch (that's me!) in your project somewhere if you use it.
GLR stands for Generalized Left-to-right Rightmost derivation. The acronym is pretty clunky in its expanded form, but what it means essentially is that it processes ambiguous parses (generalized) from left to right, and bottom-to-top. Where did bottom-to-top come from? It's hinted at with Rightmost derivation.
Parsers that use the right-most form create right associative trees and work from the leaves to the root. That's a bit less intuitive than working from the root downward toward the leaves, but it's a much more powerful way to parse, and unlike top-to-bottom parsing, it can handle left recursion (and right recursion) whereas a left recursive grammar would cause a top-to-bottom parse to stack-overflow or otherwise run without halting, depending on the implementation details. Either way, that's not desirable. Parsing from the leaves and working toward the root avoids this problem as well as providing greater recognizing power. The tradeoff is it's much harder to create translation grammars for it which can turn one language into another, and it's generally a bit more awkward to use due to the counterintuitiveness of building the tree upward.
A Generalized LR parser works by using any standard LR parsing underneath. An LR parser is a type of Pushdown Automata which means it uses a stack and a state machine to do its processing. An LR parser itself is deterministic but our GLR parser is not. We'll square that circle in a bit. First we need to outline the single and fundamental difference between a standard LR parse table and GLR parse table. Don't worry if you don't understand the table yet. It's not important to understand the specifics yet.
A standard LR parser parsing table/state table looks like this:
The above was generated using this tool.
It looks basically like a spreadsheet. Look at the cream colored cells though. See how there are two values? This causes an error with an LR parse table because it cannot hold two values per cell. The GLR parse table cells are arrays so they can hold multiple values. The above table, despite those cream colored conflicting cells, is perfectly acceptable to a GLR parser.
Any time there is a conflict like this, it's due to some localized or global ambiguity in the grammar. In other words, a grammar may not be ambiguous overall, but if you're only looking ahead a little bit, sometimes that's not enough to disambiguate. It can be said that the production in the grammar is locally ambiguous. Otherwise, if the grammar is just generally ambiguous, then it's a global ambiguity. Either way, this causes multiple values to be entered into the GLR table cell that handles that portion of the grammar.
No matter if it's local or global, an ambiguity causes the GLR parser to fork its stack and any other context information and try the parse each different way when presented with a multi-valued cell. If there is one value in the cell, no forking occurs. If there are two values, the parser forks its stack once. If there are three values, the parser forks its stack twice, and so on.
Since we're trying more than one path to parse, it can be said that our parser is non-deterministic. This impacts performance, and with a GLR parser performance is inversely proportional to how many forks are running. Additionally, each fork will result in a new parse tree. Note that this forking can be exponential in the case of highly ambiguous grammars. You pay the price for this ambiguity.
That's acceptable because GLR parsing was designed to parse natural/human language which can be highly ambiguous and the cost is baked in - to be expected in this case, and GLR is I believe still the most efficient generalized parsing algorithm to date since it only becomes non-deterministic if it needs to. As mentioned, rather than resolving to a single parse tree, a GLR parser will return every possible parse tree for an ambiguous parse, and again, that's how the ambiguity is handled whereas a standard LR parser will simply not accept an ambiguous grammar in the first place.
We'll begin by addressing a fundamental task - creating those parse tables like the one above from a given grammar. You can't generate these by hand for anything non-trivial so don't bother trying. If you need a tool to check your work, try this. It takes a grammar in Yacc format and can produce a parse table using a variety of LR algorithms.
You may need to read this to understand rules, grammars and symbols. While it was written to teach about LL(1) parsers, the same concepts such as rules, symbols (non-terminal and terminal) and FIRST/FOLLOWS sets are also employed for this style of parsing. It's highly recommended that you read it because it gives all of the above a thorough treatment. Since it's enough to be an article in its own right, I've omitted it here.
Let's kick this off with an excellent resource on generating these tables, provided by Stephen Jackson here. We'll be following along as we generate our table. I used this tutorial to teach myself how to generate these tables, and it says they are LALR(1) but I've been told this is actually an SLR(1) algorithm. I don't know enough about either to be certain either way. Fortunately, it really doesn't matter because GLR can use any LR algorithm underneath. The only stipulation is that the less conflicts that crop up in the table, the more efficient the GLR parser will be. Less powerful algorithms create more conflicts. Because of this, while you can use LR(0) for a GLR parser, it would significantly degrade the performance of it due to all the local ambiguity. The code I've provided doesn't implement LR(0) because there was no good reason to.
Let's begin with the grammar from the tutorial part way down the page at the link:
Generally, the first rule (0)/non-terminal S is created during the table generation process. We augment the grammar with it and point it to the actual start symbol which in this case would have been N, and then follow that with an end of input token (not shown)
That being said, we're going to overlook that and simply use the grammar as is. Note that the above grammar has its terminals as either lowercase letters like "x", or literal values like "=". We usually want to assign descriptive names to these terminals (like "equals") instead but this is fine here.
We need a data structure that's a lot like a rule with one important addition - an index property that indicates a position within the rule. For example, we'd represent rule 0 from above using two of these structures like this:
S → • N
S → N •
The bullet is a position within the rule. We'll call each data structure an LR(0) Item, or LR0Item. The first one above indicates that N is the next symbol to be encountered, while the next one indicates that N was just encountered. We need to generate sets of these.
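A rough C# sketch of that data structure might look like the following; the names here are illustrative only, and the project's real rule/LR0Item types differ in detail:
using System;
sealed class Rule
{
    public string Left;     // the non-terminal on the left-hand side
    public string[] Right;  // the symbols on the right-hand side
}
struct LR0Item : IEquatable<LR0Item>
{
    public readonly Rule Rule;
    public readonly int Index; // position of the bullet within Right
    public LR0Item(Rule rule, int index) { Rule = rule; Index = index; }
    // the symbol just after the bullet, or null when the bullet is at the end
    public string NextSymbol
        => Index < Rule.Right.Length ? Rule.Right[Index] : null;
    public bool Equals(LR0Item rhs)
        => ReferenceEquals(Rule, rhs.Rule) && Index == rhs.Index;
    public override bool Equals(object obj) => obj is LR0Item i && Equals(i);
    public override int GetHashCode() => Rule.GetHashCode() ^ Index;
}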
We start at the start rule, and create an LR0Item like the first one above, with the bullet before the N. Now since the bullet is before a non-terminal (N), we need to add each of the N rules, with a bullet at the beginning. Now our itemset (our set of LR0Items) looks like this:
S → • N
N → • V = E
N → • E
So far so good but we're not done yet as now we have two new LR0Items that have a bullet before a non-terminal (V and E, respectively) so we must repeat the above step for those two rules which means our itemset will now look like this:
S → • N
N → • V = E
N → • E
E → • V
V → • x
V → • * E
Now our itemset is complete since the bullets are only before terminals or non-terminals that we've already added to the itemset. Since this is our first itemset, we'll call it itemset 0 or i0.
Now all we do to make the rest of the itemsets is apply each possible input** to the itemset and increment the cursor on the accepting rules. Don't worry, here's an example:
We start by giving it x. There's only one entry in the itemset that will accept it next, and that is the second to last LR0Item - V → • x - so we're going to create our next itemset i1 with the result of that move:
V → x •
** We don't need to actually pass it each input. We only need to look at the next terminals already in the itemset. For i0 the next terminals are * and x. Those are the inputs we use.
That is the single entry for i1. Now we have to go back to i0 and this time move on * which yields:
V → * • E
That is our single entry for itemset i2 so far but we're not done with it. Like before, a bullet is before a non-terminal E, so we need to add all the rules that start with E. There's only one in this case, E → • V which leaves us with this itemset so far:
V → * • E
E → • V
Note that when we're adding new items, the cursor is at the start. Once again, we have a bullet before a non-terminal in the rule E → • V, so we have to add all the V rules, which leaves us with our final itemset for i2:
V → * • E
E → • V
V → • x
V → • * E
Look, we encountered our V → • x and V → • * E items from i0 again! This can happen and it's supposed to. They were added because of E → • V. Now we're done since the cursor is only before terminals or non-terminals that have already been added.
Now we need to move on i0 again, this time on the non-terminal V. Moving on V yields the following itemset i3:
N → V • = E
E → V •
One additional difficulty of generating these itemsets is duplicates. Duplicate itemsets must be detected, and they must not be repeated.
Anyway, here are the rest of the itemsets in case you get stuck:
i4:
S → N •
i5:
N → E •
i6:
V → * E •
i7:
E → V •
i8:
N → V = • E
E → • V
V → • x
V → • * E
i9:
N → V = E •
Generating these is kind of tricky, perhaps more so than it seems, and I haven't laid out everything we need to do above yet. While you're building these itemsets, you'll need to create some kind of transition map between the itemsets. For example, on i0 we can transition to i1 on x and to i2 on *. We know this because we worked it out while we were creating our i0 itemset. Stephen Jackson's tutorial has it laid out as a separate step, but for expediency we want to roll it into our steps above. It makes things both easier and more efficient. Remember to detect and collate duplicate sets.
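To tie the walkthrough together, here is a minimal sketch of the closure and move ("goto") steps, reusing the illustrative Rule/LR0Item types from the earlier sketch; the project's actual code is organized differently:
using System.Collections.Generic;
using System.Linq;
static class ItemSets
{
    // expand an itemset: for every bullet sitting before a non-terminal,
    // pull in that non-terminal's rules with the bullet at position 0
    public static HashSet<LR0Item> Closure(IEnumerable<LR0Item> kernel, IList<Rule> grammar)
    {
        var result = new HashSet<LR0Item>(kernel);
        bool added;
        do
        {
            added = false;
            foreach (var item in result.ToList())
            {
                var next = item.NextSymbol;
                if (next == null) continue; // bullet at the end, nothing to expand
                foreach (var rule in grammar)
                    if (rule.Left == next && result.Add(new LR0Item(rule, 0)))
                        added = true;
            }
        } while (added);
        return result;
    }
    // move on a symbol: advance the bullet past it and close over the result
    public static HashSet<LR0Item> Move(IEnumerable<LR0Item> itemSet, string symbol, IList<Rule> grammar)
    {
        var kernel = itemSet.Where(i => i.NextSymbol == symbol)
                            .Select(i => new LR0Item(i.Rule, i.Index + 1));
        return Closure(kernel, grammar);
    }
}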
Now, I'll let you in on something I've held back up until now: Each itemset above represents a state in a state machine, and the above transitions are transitions between the states. We have 10 states for the above grammar, i0 through i9. Here's how I store and present the above data, as a state machine:
sealed class LRFA
{
public HashSet<LR0Item> ItemSet = new HashSet<LR0Item>();
public Dictionary<string, LRFA> Transitions = new Dictionary<string, LRFA>();
// find every state reachable from this state, including itself (all descendants and self)
public ICollection<LRFA> FillClosure(ICollection<LRFA> result = null)
{
if (null == result)
result = new HashSet<LRFA>();
if (result.Contains(this))
return result;
result.Add(this);
foreach(var trns in Transitions)
trns.Value.FillClosure(result);
return result;
}
}
Basically, this allows you to store the computed transitions along with the itemset. The dictionary maps symbols to LRFA states. The name is ugly but it stands for LR Finite Automata, and each LRFA instance represents a single state in a finite state machine. Now that we have it we're going to pull a fast one on our algorithm, and rather than running the state machine, we're going to walk through it and create a new grammar with the state transitions embedded in it. That's our next step.
The state numbers are different below than the demonstration above out of necessity. I wanted you to be able to follow the associated tutorial at the link, but the following was generated programmatically and I don't control the order the states get created in.
For this next step, we'll be walking through the state machine we just created following each transition. Start at i0. Here we're going from i0 to each transition, through the start symbol, to the end of the input as signified by $ in the below grammar, so first we write 0/S/$ as the rule's left hand side which signifies i0, the start symbol S and the special case end of input $ as there is no actual itemset for that. From there, we only have one transition to follow, on N which leads us to i1, so we write as the single right hand side 0/N/1 leaving us with:
0/S/$ → 0/N/1
Those are some ugly symbol names, but it doesn't matter, because the computer loves them, I swear. Truthfully, we need these for a couple of reasons. One, this will disambiguate transitions from rule to rule because now we have new rules for each transition possibility, and two we can use it later to get some lookahead into the table, which we'll get into.
Next we have to follow N, so we do that, and repeat. Notice how we've created two rules with the same left-hand side here. That's because we have two transitions.
0/N/1 → 0/V/2 2/=/3 3/E/4
0/N/1 → 0/E/9
We only have to do this where LR0Items are at index 0. Here's what we're after:
0/S/$ → 0/N/1
0/N/1 → 0/V/2 2/=/3 3/E/4
0/N/1 → 0/E/9
0/V/2 → 0/x/6
0/V/2 → 0/*/7 7/E/8
0/E/9 → 0/V/2
3/E/4 → 3/V/5
3/V/5 → 3/x/6
3/V/5 → 3/*/7 7/E/8
7/E/8 → 7/V/5
7/V/5 → 7/x/6
7/V/5 → 7/*/7 7/E/8
Finally, we can begin making our parse table. These are often sparse enough that using a dictionary is warranted, although a matrix of nested arrays works too as long as you have a little more memory.
An LR parser has four actions it can perform: shift, reduce, goto and accept. The first two are primary operations and it's why LR parsers are often called shift-reduce parsers.
The parse table creation isn't as simple as it is for LL(1) parsers, but we've come pretty far. Now if you're using a dictionary and string symbols you'll want a structure like Dictionary<string,(int RuleOrStateId, string Left, string[] Right)>[] for a straight LR parse table or Dictionary<string,ICollection<(int RuleOrStateId, string Left, string[] Right)>>[] for a GLR parse table.
That's right, an array of dictionaries with a tuple in them, or in GLR's case, an array of dictionaries with a collection of tuples in them.
Now the alternative, using integer symbol ids can be expressed as int[][][] for straight LR or int[][][][] for GLR. Despite the efficiency, so many nests are confusing and it's best to use this form for generated parsers. You can create these arrays from a dictionary based parse table anyway. Below we'll be using the dictionary form.
Initialize the parse table array with one element for each state. Above we had 10 states, so our declaration would look like new Dictionary<string,(int RuleOrStateId, string Left, string[] Right)>[10].
Next compute the closure of our state machine we built earlier as a List<LRFA>. You can use the FillClosure() method by passing in a list. You can't use a HashSet<LRFA> here because we'll need a definite order and indexed access.
Create a list of itemsets (List<ICollection<LR0Item>>). For each LRFA in the closure, take its ItemSet and stash it in the aforementioned list.
Now for each LRFA in the closure, treat its index in the closure as an index into your parse table's dictionaries array. The first thing you do is create the dictionary for that array entry.
Then go through each of the transitions in that LRFA. For any symbol, use that as a dictionary key. This will be a "shift" operation in the case of a terminal and a "goto" operation if it's a non-terminal. To create the tuple for a shift or goto operation, set RuleOrStateId to the index of the transition's destination state. You can find this index by using IndexOf() over the closure list.
Now while you're on that LRFA in the loop, scan its itemset looking for an LR0Item that has the Index at the very end and where the left hand of the associated rule is the start symbol, and if you find it add an entry to the parse table in that associated state's dictionary, key of the end symbol ($) and value of a tuple with RuleOrStateId of -1 and both Left and Right set to null. This indicates an "accept" action.
Now we need to fill in the reductions. This can be sort of complicated. Remember our extended grammar? It's time to use it. First, we take the FOLLOW sets for that grammar. This is non-trivial and lengthy to describe so I'm punting the details of doing it to this article I linked you to early on. The project contains a FillFollows() function that will compute the follows set for you.
Now that you have them, you'll need to map each extended rule in the grammar to the follows for its left-hand side. I use a dictionary for this, like Dictionary<CfgRule,ICollection<string>>, albeit with a catch: we need to merge some of these rules. I do this by creating a class implementing IEqualityComparer<CfgRule> and then writing a custom comparison which does two things - it grabs the final id from the rule's rightmost symbol and strips the rules of their annotations so they are original rules again. For example, 0/N/1 → 0/V/2 2/=/3 3/E/4 would cause us to stash the number 4 at the end and then strip the rule down to N → V = E, now reflecting the original rule in the non-extended grammar. If two of these unextended rules are the same and they share the same stashed number, then they can be merged into one, so we return true from the Equals() function. Pass this comparer to the constructor of the above dictionary, so it will now do the work for us. As we add items, just make sure you don't fail if there's already an item present. At most, you'll merge their FOLLOW sets instead of adding a new entry, but I'm not sure this is necessary as I think they might always have the same FOLLOW set. My code plays it safe and attempts to merge existing entries' FOLLOW sets. A sketch of such a comparer follows.
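A minimal sketch of that comparer, assuming CfgRule exposes a string Left and an IList<string> Right (which may not match the project exactly) and that extended symbols look like "state/symbol/state" as shown above:
using System.Collections.Generic;
sealed class ExtendedRuleComparer : IEqualityComparer<CfgRule>
{
    static string Strip(string sym) => sym.Split('/')[1];  // "0/N/1" -> "N"
    static string FinalState(CfgRule r) =>
        // epsilon rules have nothing on the right, so fall back to the left-hand side's
        // number (see the epsilon note below)
        (0 == r.Right.Count ? r.Left : r.Right[r.Right.Count - 1]).Split('/')[2];
    public bool Equals(CfgRule x, CfgRule y)
    {
        if (FinalState(x) != FinalState(y)) return false;   // must share the stashed number
        if (Strip(x.Left) != Strip(y.Left)) return false;
        if (x.Right.Count != y.Right.Count) return false;
        for (var i = 0; i < x.Right.Count; ++i)
            if (Strip(x.Right[i]) != Strip(y.Right[i])) return false; // compare the unextended rules
        return true;
    }
    public int GetHashCode(CfgRule r)
        => Strip(r.Left).GetHashCode() ^ FinalState(r).GetHashCode();
}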
Note: In the case of epsilon/nil rules like A →, you'll need to handle them slightly differently below.
Now for each entry in your above map dictionary, you'll need to do the following:
Take the rule and its final number** - the final number is the index into your array of dictionaries that makes up your parse table (remember that thing?). For each entry in the FOLLOW items in your map entry's Value, add a tuple to the parse table. To create the tuple for this, simply strip the rule to its unextended form, find the index of the rule within the grammar, and then use that, along with the left and right hand side of the rule, in the corresponding spots in the tuple. For example, if the rule at index 1 is N → V = E then the tuple would be (RuleOrStateId: 1, "N" , new string[] { "V", "=", "E" }). Now, check to see if the dictionary already has an entry for this FOLLOW symbol. If it does, and it's the same, no problem. That may happen, but it's fine. If it does and it's not the same, this is a conflict. If the existing tuple indicates a shift, this is a shift-reduce conflict. This isn't a parse killer. Sometimes these are unavoidable, such as the dangling else in C family languages. Usually, the shift will take precedence. If the existing tuple indicates a reduce action, this is a reduce-reduce conflict. This will stop a standard LR parser dead in its tracks. With a GLR parser, both alternatives will be parsed. Anyway, if this is a GLR table and a tuple is already present, just add another tuple. If it's a regular LR parser and the existing action was a shift, just pick an action (usually the shift) to take priority and ideally issue a warning about the conflict to the user; if the previous tuple is a reduce, issue an error. Neither applies to GLR, which simply keeps all the tuples.
** For epsilon/nil rules, you'll need to take the final number from the left hand side of the rule.
We did it! That's a lot of work, but we now have a usable parse table. For production code, you may want to have some sort of progress indicator because this operation can take significant time for real-world grammars.
No matter the algorithm, be it LALR(1), SLR(1) or full LR(1) the parse tables are the same in overall structure. The GLR parse table contains one important difference in that the cells are arrays.
Because of this for LR, our parse code is the same regardless of the algorithm (excepting GLR). However, since GLR uses these principles to parse we will cover standard LR parsing here.
Parsing with LR is somewhat convoluted but overall simple once you get past some of the twists involved.
You'll need the parse table, a stack and a tokenizer. The tokenizer will give you tokens with symbols, values, and usually line/position info. If the tokenizer reports only symbol ids, you'll need to do a lookup to get the symbol, so you'll need at least an array of strings as well.
The parse table will direct the parser and use of the stack and input using the following directives:
If no entry is found, this is a syntax error.
Let's revisit Stephen Jackson's great work:
Let's walk through the first steps in parsing x = * x. The first step is to initialize the stack and read the first token, basically pushing 0 onto the stack
The first token we found in state 0 was an x, which indicates that we must shift to state 1. 1 is placed on the stack and we advance to the next token. That leads us to the following:
When we reduce, we report the rule for the reduction. Rule 4, V → x, has 1 token (x) on the right-hand side. Pop the stack once, leaving it with 0. In state 0, V (the left-hand side of rule 4) has a goto value of 3, so we push a 3 on the stack. This step gives our table a new row:
Here's the whole process:
By itself, that doesn't look very useful but we can easily build trees with that information. Here, Stephen Jackson explains the results:
V(x) = V(* E(V(x))) V(x) = E(V( * E(V(x)))) N(V(x) = E(V( * E(V(x))))) S(N(V(x) = E(V( * E(V(x))))))).
V →.
Naturally, parsing with GLR is a bit more complicated than the above, but it is at least based on the same principles. With GLR, it's best to make an overarching parser class, and then delegate the actual parsing to a worker class. This way, you can run several parses concurrently by spawning multiple workers, which is what we need. Each worker manages its own stack, an input cursor and any other state.
The complication is not really in running the workers like one might think but rather managing the input token stream. The issue is that each worker might be in a slightly different position than the next even if we run them in lockstep so what we must do is create a sliding window to manage the input. The window should expand as necessary (when the workers get further apart from each other in the input) and shrink when possible (such as when a worker is removed or the workers move closer together.) I use my LookAheadEnumerator<Token> class to handle much of the bookkeeping on the actual window. The rest is making sure the window slides properly - once all the workers have stepped/advanced count the times each moved. Take the minimum of all of those and advance the primary cursor by that much. Finally, update the worker's input position to subtract that minimum value from its position. I've noticed many other GLR offerings requiring the entire input to be loaded into memory (basically as a string) before it will parse. If we had done that, this wouldn't be so difficult, but the flexibility is worth the tradeoff. This way, you can parse directly from a huge file or from a network stream without worrying.
The only issue for huge documents is you may want to use the pull parser directly rather than generating a tree, which could take an enormous amount of memory. The pull parser can be a little tricky to use, because while it works a bit like XmlReader, it also reads concurrently from different parses, meaning you'll have to check the TreeId every time after you Read() to see which "tree" you're currently working on.
Anyway, in implementing this, we use the parse table exactly like before except when we encounter multiple tuples in the same cell. When we find more than one we must "fork" a new worker for each additional tuple which gets its own copy of the stack and its own independent input cursor (via LookAheadEnumerator<Token>) - whenever the main parser is called to move a step it chooses a worker to delegate to based on a simple round-robin scheduling scheme which keeps all the workers moving in lockstep. Removal of a worker happens once the worker has reached the end of the input or optionally when it has encountered too many errors in the parse tree, with "too many" being set by the user.
There's a wrinkle in generating the parse trees that's related to the fact that we parse multiple trees at the same time. When the parser forks a worker (as indicated by the presence of a new TreeId that hasn't been seen yet), we must, similarly to the parsing itself, clone the tree stack to accommodate it. Basically, we might be happily parsing along with worker TreeId of 1, having already built a partial tree, when all of a sudden we fork and start seeing TreeId 2. When that happens, we must clone the tree stack from #1 and then continue, adding to each tree stack as appropriate. Finally, each tree id that ends up accepting winds up being a parse tree we return.
I have not provided you a parser generator, but simply an experimental parser and the facilities to define a grammar and produce a parser from it. A parser generator would work the same way except that the arrays we initialize the parser with would have been produced from generated code. That makes things just a little easier.
All of the directly relevant code is under namespace C. The other namespaces provide supporting code that isn't directly related to what we've done so far above. You'll see things like LexContext.cs in the project, which exposes a text cursor as a LexContext class under namespace LC. We use that to help parse our CFG files, which we'll get to very soon, but it isn't related to the table building or parsing we were exploring above, as CFG documents are trivial and parsed with a hand-rolled parser. CFG document here means Context Free Grammar document. This is a bit of a misnomer with GLR since GLR can technically parse from a contextful grammar as well, but it still has all the properties of a regular CFG, so the name is fine; in fact it's still preferable, as all of the mathematical properties that apply to CFGs apply here too.
Open the solution and navigate to the project "scratch" in your IDE. This is our playground for twiddling with the parser.
Let's try it now. Create a new CFG document. Since I like JSON for examples, let's define one for JSON which we'll name json.cfg:
(This document is provided for you with the project to save you some typing.)
Great, but now what? First, we need to load this into a CfgDocument class:
var cfg = CfgDocument.ReadFrom(@"..\..\json.cfg");
cfg.RebuildCache(); // not necessary, but recommended
The second line isn't necessary but it's strongly recommended especially when dealing with large grammars. Any time you change the grammar, you'll need to rebuild the cache. Since we don't intend to change it, just to load it and generate tables from that means we can cache it now.
There's also a Parse() function and a ReadFromUrl() function. ToString() fulfills the correlating conversion back into the above format (except longhand, without using | ) while an overload of it can take "y" as a parameter which causes it to generate the grammar in Yacc format for use with tools like the JISON visualizer I've linked to prior.
A CfgDocument is made up of Rules as represented by CfgRule. In the above scenario, we loaded these rules from a file but you can add, remove and modify them yourself. The rule has a left hand and right hand side which are composed of symbols. Meanwhile, each symbol is represented by a string, with the reserved terminal symbols #EOS and #ERROR being automatically produced by CfgDocument. Use the document to query for terminals, non-terminals and rules. You can also use it to create FIRST, FOLLOW, and PREDICT sets. All of these are accessed using the corresponding FillXXXX() functions. If you remember, we use the FOLLOW sets to make our LR and GLR parse tables above. This is where we got them from, except our CFG was an extended grammar rules like 0/S/$ -> 0/N/1 as seen before.
This is all well and good, but what about what we're actually after? - parse tables! Remember that big long explanation on how to generate them? I've provided all of that in CfgDocument.LR.cs which provides TryToLR1ParseTable() and TryToGlrParseTable(). The former function doesn't generate true LR(1) tables because those would be huge. Instead, it takes a parameter which tells us what kind of LR(1) family of table we want. Currently the only option is Lalr1 for LALR(1), but that suits us just fine. TryToGlrParseTable() will give us the GLR table we need.
Each of these functions returns a list of CfgMessage objects which can be used to report any warnings or errors that crop up. There won't be any for GLR table creation since even conflicting cells (ambiguous grammars) are valid, but I've provided it just the same for consistency. Enumerate through the messages and report them, or simply pass them to CfgException.ThrowIfErrors() to throw if any of the messages were ErrorLevel.Error. The out value gives us our CfgLR1ParseTable or our CfgGlrParseTable, respectively. Now that we have those, we have almost enough information to create a parser.
One thing we'll need in order to parse is some sort of implementation of IEnumerable<Token>. I've provided two of them with the project, one for JSON called JsonTokenizer and one for the Test14.cfg grammar which is ambiguous - for trying the GLR parsing. That tokenizer is called Test14Tokenizer. The other way to create tokens is simply make a List<Token> instance and manually fill it with tokens you create. You can pass that as your "tokenizer". The tokenizers I've provided were generated by Rolex although they had to be modified to take an external Token definition. Eventually, I'll add an external token feature to Rolex but it's not important as our scenario here is somewhat contrived. In the real world, we'd be generating the code for our parsers and those won't suffer the same problem because all the generated code can share the same Token declaration.
On to the simpler things. One is the symbol table which we can get by simply calling FillSymbols() and converting that to a string[]. The other is the error sentinels (int[]) which will require some explanation. In the case of an error, the parsers will endeavor to keep on parsing so that all errors can be found in a single pass. This is easier said than done. We use a simple technique called "panic mode" error recovery that is a form of local error recovery. This requires that we define a safe ending point for when we encounter an error. For languages like C, this can mean ;/semi and }/rbrace which is very reasonable. For JSON, we'll use ]/rbracket, ,/comma and }/rbrace. When an error happens, we gather input tokens until we find one of those, and then pop the stack until we find a valid state for a sentinel or run out of states to pop. Ideally, we'd only use this method for a standard LR parser, and use a form of global error recovery for the GLR parser. However, the latter is non-trivial and not implemented here, even though it would result in better error handling. Our errorSentinels array will contain the indices into the symbol table for our two terminals.
Finally, we can create a parser. Here's the whole mess from beginning to end:
var cfg = CfgDocument.ReadFrom(@"..\..\json.cfg");
cfg.RebuildCache();
// write our grammar out in YACC format
Console.WriteLine(cfg.ToString("y"));
// create a GLR parse table. We can use a standard LR parse table here.
// We're simply demoing the GLR parser
CfgGlrParseTable pt;
var msgs = cfg.TryToGlrParseTable(out pt);
// there shouldn't be any messages for GLR, but this is how we process them if there were
foreach(var msg in msgs)
Console.Error.WriteLine(msg);
CfgException.ThrowIfErrors(msgs);
// create the symbol table to map symbols to indices which we treat as ids.
var syms = new List<string>();
cfg.FillSymbols(syms);
// our parsers don't use the parse table directly.
// they use nested arrays that represent it for efficiency.
// we convert it using ToArray() passing it our symbol table
var pta= pt.ToArray(syms);
// now create our error sentinels. If this is an empty array only #EOS will be considered
var errorSentinels = new int[] { syms.IndexOf("rbracket"), syms.IndexOf("rbrace") };
// let's get our input
string input;
using (var sr = new StreamReader(@"..\..\data2.json"))
input = sr.ReadToEnd();
// now make a tokenizer with it
var tokenizer = new JsonTokenizer(input);
var parser = new GlrTableParser(pta, syms.ToArray(), errorSentinels,tokenizer);
// uncomment the following to display the raw pull parser output
/*
while(parser.Read())
{
Console.WriteLine("#{0}\t{1}: {2} - {3}",
parser.TreeId, parser.NodeType, parser.Symbol, parser.Value);
}
// reset the parser and tokenizer
tokenizer = new JsonTokenizer(input);
parser = new GlrTableParser(pta, syms.ToArray(), errorSentinels,tokenizer);
*/
Console.WriteLine();
Console.WriteLine("Parsing...");
Console.WriteLine();
// now for each tree returned, dump it to the console
// there will only be one unless the grammar is ambiguous
foreach(var pn in parser.ParseReductions())
Console.WriteLine(pn);
To try with different grammars, simply switch out the grammar filename, the error sentinels, the tokenizer and the input. Test14.cfg is a small ambiguous grammar that can be used to test. I recommend it with an input string like bzc.
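A hedged sketch of those swaps, assuming the Test14Tokenizer constructor takes an input string the same way JsonTokenizer does:
var cfg = CfgDocument.ReadFrom(@"..\..\Test14.cfg");
cfg.RebuildCache();
CfgGlrParseTable pt;
CfgException.ThrowIfErrors(cfg.TryToGlrParseTable(out pt));
var syms = new List<string>();
cfg.FillSymbols(syms);
var pta = pt.ToArray(syms);
// no obvious recovery points in this grammar, so an empty sentinel array (only #EOS) is used here
var errorSentinels = new int[0];
var parser = new GlrTableParser(pta, syms.ToArray(), errorSentinels, new Test14Tokenizer("bzc"));
foreach (var tree in parser.ParseReductions())
    Console.WriteLine(tree);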
Anyway, for the JSON grammar and the associated input, we get:
%token lbrace rbrace comma string colon lbracket rbracket number null true false
%%;
Parsing...
+- json
+- object
+- lbrace {
+- fields
| +- field
| +- string "backdrop_path"
| +- colon :
| +- value
| +- string "/lgTB0XOd4UFixecZgwWrsR69AxY.jpg"
+- rbrace }
Meanwhile, doing it for Test14 as suggested above yields two parse trees:
%token a c d b z
%%
S : a A c
| a B d
| b A d
| b B c;
A : z;
B : z;
Parsing...
+- A
+- b b
+- A
| +- z z
+- #ERROR c
+- S
+- b b
+- B
| +- A
| +- z z
+- c c
The first one was no good as it tried the wrong reduce - or rather, that reduce didn't pan out this time, but it theoretically may have with the right grammar and a different input string. You'll note the error recovery is still poor, and this is still a work in progress. I'm experimenting with different techniques to improve it so that it will continue without clearing the stack in so many cases. In any case, our second tree resulted in a valid parse. With some grammars, multiple trees may be valid and error free due to ambiguity. For some inputs, every tree might have errors, but the errors might be different in each one. Figuring out which tree to pick is not the GLR parser's job. It depends heavily on what you intend to do with it. With C# for example, you might get many different trees due to the ambiguity of the language, but applying type information can narrow the trees down to the single valid tree the code represents.
GlrTableParser delegates to GlrWorker, which implements an LR(1) parser in its own right. We spawn one of these workers for each path through the parse. Only the first one is created by GlrTableParser itself; the workers create themselves after that during the parse. At each fork, we create a new one for each alternate path, which can yield exponential GlrWorker creation in highly ambiguous grammars, so make your grammars tight to get the best performance. We know we've encountered a fork when there are multiple entries in the parse table cell at our current state and position. Here, we see the lookup in the parse table in GlrWorker.Read(). Note the section that spawns more workers for every tuple after the first one.
public bool Read()
{
if(0!=_errorTokens.Count)
{
var tok = _errorTokens.Dequeue();
tok.SymbolId = _errorId;
CurrentToken = tok;
return true;
}
if (_continuation)
_continuation = false;
else
{
switch (NodeType)
{
case LRNodeType.Shift:
_ReadNextToken();
break;
case LRNodeType.Initial:
_stack.Push(0);
_ReadNextToken();
NodeType = LRNodeType.Error;
break;
case LRNodeType.EndDocument:
return false;
case LRNodeType.Accept:
NodeType = LRNodeType.EndDocument;
_stack.Clear();
return true;
}
}
if (0 < _stack.Count)
{
var entry = _parseTable[_stack.Peek()];
if (_errorId == CurrentToken.SymbolId)
{
_tupleIndex = 0;
_Panic();
return true;
}
var tbl = entry[CurrentToken.SymbolId];
if(null==tbl)
{
_tupleIndex = 0;
_Panic();
return true;
}
int[] trns = tbl[_tupleIndex];
// only create more if we're on the first index
// that way we won't create spurious workers
if (0 == _tupleIndex)
{
for (var i = 1; i < tbl.Length; ++i)
{
_workers.Add(new GlrWorker(_Outer, this, i));
}
}
if (null == trns)
{
_Panic();
_tupleIndex = 0;
return true;
}
if (1 == trns.Length)
{
if (-1 != trns[0]) // shift
{
NodeType = LRNodeType.Shift;
_stack.Push(trns[0]);
_tupleIndex = 0;
return true;
}
else
{ // accept
//throw if _tok is not $ (end)
if (_eosId != CurrentToken.SymbolId)
{
_Panic();
_tupleIndex = 0;
return true;
}
NodeType = LRNodeType.Accept;
_stack.Clear();
_tupleIndex = 0;
return true;
}
}
else // reduce
{
RuleDefinition = new int[trns.Length - 1];
for (var i = 1; i < trns.Length; i++)
RuleDefinition[i - 1] = trns[i];
for (int i = 2; i < trns.Length; ++i)
_stack.Pop();
// There is a new number at the top of the stack.
// This number is our temporary state. Get the symbol
// from the left-hand side of the rule #. Treat it as
// the next input token in the GOTO table (and place
// the matching state at the top of the set stack).
// - Stephen Jackson,
var state = _stack.Peek();
var e = _parseTable[state];
if (null == e)
{
_Panic();
_tupleIndex = 0;
return true;
}
_stack.Push(_parseTable[state][trns[1]][0][0]);
NodeType = LRNodeType.Reduce;
_tupleIndex = 0;
return true;
}
}
else
{
// if we already encountered an error
// return EndDocument in this case, since the
// stack is empty there's nothing to do
NodeType = LRNodeType.EndDocument;
_tupleIndex = 0;
return true;
}
}
Refer to the tutorial on running a parse given earlier. See how we initially grab the tuple given by _tupleIndex and then reset it to zero? That's because we only want to take the alternate path once. The worker only takes an alternate path the first time it is Read(); after that, it spawns additional workers for each of the alternate paths it encounters, wherein they revert to the first path after the initial Read() as well, and spawn more workers for any alternates they encounter, and so on. Yes, again, this can yield exponential numbers of workers. It's the nature of the algorithm: walking each possible path requires exponential numbers of visits for each choice that can be made.
Also note how we report the rule definition used during a reduction. This is critical so that the user of the parser can match terminals back to the rules they came from. It's simply stored as int[] where index zero is the left hand side's symbol id, and the remainder are the ids for the right hand symbols.
Another issue is in creating a new worker from an existing worker. We must copy its stack and other state and receive an independent input cursor and a tuple index to tell us which path to take on the initial read. On the initial read, we skip the first part of the routine which is what _continuation is for - we're restarting the parse from where we left off. Here's the constructor for the worker that takes an existing worker:
public GlrWorker(GlrTableParser outer,GlrWorker worker,int tupleIndex)
{
_Outer = outer;
_parseTable = worker._parseTable;
_errorId = worker._errorId;
_eosId = worker._eosId;
_errorSentinels = worker._errorSentinels;
ErrorTokens = new Queue<Token>(worker.ErrorTokens);
_tokenEnum = worker._tokenEnum;
var l = new List<int>(worker._stack);
l.Reverse();
_stack = new Stack<int>(l) ;
Index = worker.Index;
_tupleIndex = tupleIndex;
NodeType = worker.NodeType;
Id = outer.NextWorkerId;
CurrentToken = worker.CurrentToken;
unchecked { ++outer.NextWorkerId; }
_continuation = true;
_workers = worker._workers;
}
Here, we create our new worker, using a somewhat awkward but necessary way to clone the stack - I really should use my own stack implementation to solve this, but I haven't for this code. We assign a new id to the worker, we indicate that it's a continuation, we copy our parse table and error sentinel references, and we create a queue to hold errors we need to report. It's not shown here, but _Outer is actually a property that wraps _outer, itself a WeakReference<GlrTableParser>. This is to avoid circular references, which create strain on the garbage collector. The parser will always live at least as long as its workers, so a weak reference is inconsequential for us but not for the GC. The Index property serves as an offset into the current input. We need this for our sliding window technique mentioned earlier. We simply take the index from the current worker since we're in the same logical position. _tupleIndex once again tells us which path to take on this next fork.
That covers the meat of our worker class. Let's cover the GlrTableParser that delegates to it. Mainly, we're concerned with the Read() method which does much of the work:
public bool Read()
{
if (0 == _workers.Count)
return false;
_workerIndex = (_workerIndex + 1) % _workers.Count;
_worker = _workers[_workerIndex];
while(!_worker.Read())
{
_workers.RemoveAt(_workerIndex);
if (_workerIndex == _workers.Count)
_workerIndex = 0;
if (0 == _workers.Count)
return false;
_worker = _workers[_workerIndex];
}
var min = int.MaxValue;
for(int ic=_workers.Count,i=0;i<ic;++i)
{
var w = _workers[i];
if(0<i && w.ErrorCount>_maxErrorCount)
{
_workers.RemoveAt(i);
--i;
--ic;
}
if(min>w.Index)
{
min=w.Index;
}
if (0 == min)
break;
}
var j = min;
while(j>0)
{
_tokenEnum.MoveNext();
--j;
}
for (int ic = _workers.Count, i = 0; i < ic; ++i)
{
var w = _workers[i];
w.Index -= min;
}
return true;
}
Here, if we don't have any workers, we're done. Then we increment _workerIndex, round-robin style, and use that to choose the current worker we delegate to among all the current workers. Starting at _workerIndex, we try to Read() from the next worker; if it returns false, we remove it from _workers and try the next worker. We continue this process until we find a worker that read or we run out of workers. If we're out of workers, we return false.
Now, after a successful Read(), we check all the workers for how much they've advanced along the current primary cursor by checking their Index. We do this for all the workers because some may not have been moved forward during the last Read() call. Anyway, the minimum advance is what we want to slide our window by, so we increment the primary cursor by that much. Next, we fix all of the workers' Index values by subtracting that minimum value from them. Finally, we return true to indicate a successful read.
One more thing of interest about the parser is the ParseReductions() method, which will return trees from the reader. The trees, of course, are much easier to work with than the raw reports from the pull parser. Here's the method:
public ParseNode[] ParseReductions(bool trim = false, bool transform = true)
{
var map = new Dictionary<int, Stack<ParseNode>>();
var oldId = 0;
while (Read())
{
Stack<ParseNode> rs;
// if this a new TreeId we haven't seen
if (!map.TryGetValue(TreeId,out rs))
{
// if it's not the first id
if (0 != oldId)
{
// clone the stack
var l = new List<ParseNode>(map[oldId]);
l.Reverse();
rs = new Stack<ParseNode>(l);
}
else // otherwise create a new stack
rs = new Stack<ParseNode>();
// add the tree id to the map
map.Add(TreeId, rs);
}
ParseNode p;
switch (NodeType)
{
case LRNodeType.Shift:
p = new ParseNode();
p.SetLocation(Line, Column, Position);
p.Symbol = Symbol;
p.SymbolId = SymbolId;
p.Value = Value;
rs.Push(p);
break;
case LRNodeType.Reduce:
if (!trim || 2 != RuleDefinition.Length)
{
p = new ParseNode();
p.Symbol = Symbol;
p.SymbolId = SymbolId;
for (var i = 1; RuleDefinition.Length > i; i++)
{
var pc = rs.Pop();
_AddChildren(pc, transform, p.Children);
if ("#ERROR" == pc.Symbol)
break;
}
rs.Push(p);
}
break;
case LRNodeType.Accept:
break;
case LRNodeType.Error:
p = new ParseNode();
p.SetLocation(Line, Column, Position);
p.Symbol = Symbol;
p.SymbolId = _errorId;
p.Value = Value;
rs.Push(p);
break;
}
oldId = TreeId;
}
var result = new List<ParseNode>(map.Count);
foreach (var rs in map.Values)
{
if (0 != rs.Count)
{
var n = rs.Pop();
while ("#ERROR" != n.Symbol && 0 < rs.Count)
_AddChildren(rs.Pop(), transform, n.Children);
result.Add(n);
}
}
return result.ToArray();
}
This is just an iterative way of interpreting the parse results as Stephen Jackson covered above, except his was recursive and was only dealing with one parse tree. It would be very difficult, if not impossible to implement this recursively given that different tree information can be returned in any order.
Stay tuned for GLoRy, a parser generator that takes this code to the next level to generate parsers for virtually anything parseable.
|
https://www.codeproject.com/Articles/5259825/GLR-Parsing-in-Csharp-How-to-Use-The-Most-Powerful
|
CC-MAIN-2022-27
|
en
|
refinedweb
|
QUrl toLocalFile
I have a QUrl that I am creating from a string path. The path contains the # character in one of the folder names. The issue I am having is that when I do a .toLocalFile it stops at the # character. If I do a .toString it returns it fine. I also noticed that .toEncoded doesn't encode the # for some reason. Any ideas?
- SGaist Lifetime Qt Champion last edited by
Hi,
AFAIK, # in a url will be considered a fragment so it might be that that you are hitting. However I don't know how it's handled for a local file path
- dbzhang800 last edited by
Hi, I can not reproduce this problem under Qt5
#include <QtCore>

int main(int argc, char *argv[])
{
    QString file("D:\\a#b\\aaa#bbbb.txt");
    QUrl url("");
    qDebug() << url.toLocalFile();
    qDebug() << QUrl::fromLocalFile(file);
    return 0;
}
The output is
"D:/a#b/aaa#bbbb.txt" QUrl("")
It turned out I had a lot more wrong with my path handling than just this. The differences between a url and a local path are a little confusing. I was grabbing the local path from a QDirIterator and then tacking on "". Then I would create a QUrl from this. I think the QUrl then assumes it's already in url format and doesn't encode the characters, or something like that.
I was getting the initial path from a QDirIterator.
QString filename = it.next();
In this case the path I get is
D:/Test#/Test/file.txt
Then if I run this code:
QUrl url("" + filename);
qDebug() << url;
qDebug() << url.toLocalFile();
The output is
QUrl("")
"D:/Test"
I noticed the .fromLocalFile earlier, but didn't notice that when I passed it in with the already on, it was adding another. When I took a second look I realized that adding it on myself was not correct. I needed to use .fromLocalFile on the file name directly coming out of the QDirIterator. Once I did this it worked great.
|
https://forum.qt.io/topic/56430/qurl-tolocalfile
|
CC-MAIN-2022-27
|
en
|
refinedweb
|
Today I learned: resetting the Beam in Space Center
(This is something I posted on Slack, but I figured it might be useful to cross-post here for others.)
I can’t get my space center “Beam” to show up. This was a problem for me in 4.1 (but earlier as well, maybe) and now is still a problem in 4.2.
@ryan helpfully responded with the following code, asking "what’s this output?":
from lib.tools.defaults import getDefaultColor
from lib.tools.misc import NSColorToRgba
from mojo.UI import CurrentSpaceCenter

beam_color = NSColorToRgba(getDefaultColor("spaceCenterBeamStrokeColor"))
csc = CurrentSpaceCenter()
print(beam_color)
print(csc.beam())
csc.setBeam(500)
Turns out, my beam was a visible color but set super high:
(0.0, 0.0108, 0.9982, 1.0) 7856
And the setBeam() method fixed that. Thanks, Ryan!
@frederik added to the conversation:
makes me think, of an alt menu title for “Beam”: “Reset Beam” which restores the value to the original value (half of the height)
...which is an idea I like!
I’m not sure what to tag this, but I guess Feature Request makes sense, as a feature might help others avoid this confusion in the future. After all, I literally went many months without a beam before asking for help, because I figured it was something that was just broken and might be fixed in a future update. 😅 Glad I finally asked, but a "Reset Beam" option could have helped on my most recent serif project!
|
https://forum.robofont.com/topic/1034/today-i-learned-resetting-the-beam-in-space-center/1
|
CC-MAIN-2022-27
|
en
|
refinedweb
|
Pick guest CPU architecture based on host arch in libvirt driver¶
Implement a new image meta property that allows for the selection of the correct QEMU binary, CPU architecture, and machine type for a guest architecture that is different from the host architecture: an x86_64 guest running on an AArch64 host, and vice versa.
Problem description¶
Currently, in many places, Nova’s libvirt driver makes decisions on how to configure guest XML based on the host CPU architecture (caps.host.cpu.arch). That is not optimal in all cases where physical hardware support is limited for non-traditional architectures.
So all of the said code needs to be reworked to make those decisions based on the guest CPU architecture (i.e. guest.arch, which should be set based on the image metadata property hw_emulation_architecture).
A related piece of work is to distinguish between hosts that can do AArch64, PPC64, etc. via KVM (which is hardware-accelerated) vs. those that can only do it via plain emulation (TCG); this is to ensure that guests are not arbitrarily scheduled on hosts that are incapable of hardware acceleration, thus losing out on performance-related benefits.
Use Cases¶
As an admin/operator I want to allow for CPU architecture emulation due to constraints of, or lack of, alternate physical architecture types.
As an admin/operator I want to deploy AArch64, PPC64, MIPS, RISC-V, and s390x as emulated architectures on x86_64.
As an admin/operator I want to deploy x86_64, PPC64, MIPS, RISC-V, and s390x as emulated architectures on AArch64.
Proposed change¶
To enable this new CPU architecture spec, an image property will be introduced, along with an additional function that allows for checks and comparisons between the host architecture and the desired emulation architecture.
Note
The following
hw_architecture image property relates to the physical
architecture of the compute hosts. If physical nodes are not present for
the desired architecture then the instance will not be provisioned.
Retrieve OS architecture for LibvirtConfigGuest¶
This leverages nova virt libvirt config to grab the os_arch and update the hw_architecture image meta property with the retrieved value. With this change we can perform the required comparisons within the nova virt libvirt driver for the hw_architecture and hw_emulation_architecture values.
if self.os_arch is not None:
    type_node.set("arch", self.os_arch)
Allow emulation architecture to be defined by image property¶
To enable defining the guest architecture the following string based image meta property will be introduced:
hw_emulation_architecture
When this image property is not defined then instance provisioning will occur as normal. The process is demonstrated below via the 3 examples.
Example 1 When both image meta properties are set, the emulation architecture will take precedence, and it will build on an X86_64 host that supports emulating AARCH64, or whatever supported architecture is inputted in place of AARCH64.
hw_emulation_architecture = AARCH64
hw_architecture = X86_64
Example 2 When the emulation image meta property is set, the emulation architecture will take precedence, and it will build on any host that supports emulating X86_64, or whatever supported architecture is inputted in place of X86_64.
hw_emulation_architecture = X86_64
hw_architecture = <unset>
Example 3 When the hw_emulation_architecture property is unset it will build on any host that natively supports the specified architecture.
hw_emulation_architecture = <unset>
hw_architecture = AARCH64 OR hw_architecture = X86_64
Update scheduler request_filter to handle both architecture fields¶
Within the transform_image_metadata function, we will add the two architecture properties to the prefix_map. This in itself also requires additional os-traits to be added for both hw and compute.
def transform_image_metadata(ctxt, request_spec):
    """Transform image metadata to required traits.

    This will modify the request_spec to request hosts that support
    virtualisation capabilities based on the image metadata properties.
    """
    if not CONF.scheduler.image_metadata_prefilter:
        return False
    prefix_map = {
        'hw_cdrom_bus': 'COMPUTE_STORAGE_BUS',
        'hw_disk_bus': 'COMPUTE_STORAGE_BUS',
        'hw_video_model': 'COMPUTE_GRAPHICS_MODEL',
        'hw_vif_model': 'COMPUTE_NET_VIF_MODEL',
        'hw_architecture': 'HW_ARCH',
        'hw_emulation_architecture': 'COMPUTE_ARCH',
    }
Update os-traits¶
Below are the os-traits proposed for the compute CPU architectures to be supported for emulation, whereas the hardware architecture includes all architectures currently supported by nova within the nova objects fields.
TRAITS = [ 'AARCH64', 'PPC64LE', 'MIPSEL', 'S390X', 'RISCV64', 'X86_64', ]
To account for the emulation of these architectures, updates will be made to the nova virt libvirt driver ensuring that compute capability traits are reported for each architecture emulator that is available on the hosts.
Perform architecture test against emulation¶
To facilitate a simple check throughout the nova virt libvirt driver the following function does a check and will set the appropriate guest architecture based on emulation, if defined.
def _check_emulation_arch(self, image_meta):
    emulation_arch = image_meta.properties.get("hw_emulation_architecture")
    if emulation_arch:
        arch = emulation_arch
    else:
        arch = libvirt_utils.get_arch(image_meta)
    return arch
The actual check is then used when processing the image_meta dictionary values:
arch = self._check_emulation_arch(image_meta)
Proposed emulated architectures and current support level¶
All testing performed with the changes proposed in this spec demonstrated that the emulated guests maintain current support for all basic lifecycle actions. Listed below are the proposed architectures and their current functional level with the spec, with the plan of all being Tested and validated for functional support.
X86_64 - Tested and validated for functional support
AARCH64 - Tested and validated for functional support
PPC64LE - Tested and validated for functional support
MIPSEL - Awaiting libvirt patch for PCI support
S390X - Troubleshooting guest kernel crash for functional support
RISCV64 - To be tested
Alternatives¶
Other attempts have been made to leverage existing image meta properties such as hw_architecture only; however, this opens various other issues with conflicting checks and alterations of core code. It also runs into issues during the scheduling of instances, as there will be no matching physical host architectures, which is what this spec aims to solve.
While the best option is providing actual physical support for the CPU architectures you want to test, this opens the ability for a wider audience to perform the same type of local emulation they can with QEMU, within an OpenStack environment.
Data model impact¶
Adds a new set of standard traits to os-traits.
Adds new property to image_meta objects.
The OS arch value will be pulled into the
LibvirtConfigGuest.
REST API impact¶
None
Security impact¶
None
Notifications impact¶
None
Other end user impact¶
None
Performance Impact¶
This is expected to improve boot performance in a heterogeneous cloud by reducing reschedules. By passing a more constrained request to placement this feature should also reduce the resulting set of allocation_candidates that are returned.
This will also ensure that native support is handled first over emulation as it requires a specific property to be set in order to perform the required checks.
Other deployer impact¶
Ensure that all the desired QEMU binaries are installed on the physical nodes for the cpu architectures that you would like to support.
Developer impact¶
None
Upgrade impact¶
None
Implementation¶
Assignee(s)¶
- Primary assignee:
chateaulav - Jonathan Race
Feature Liaison¶
- Feature liaison:
Liaison Needed
Work Items¶
Add new traits
Update prefilter
Modify nova libvirt virt driver to perform checks for emulation architecture
Add new property to image_meta objects
Modify nova libvirt virt config to pull OS arch into LibvirtConfigGuest
Tests
Dependencies¶
Blueprint
Project Changesets
Libvirt MIPs PCI Bug
Testing¶
Unit tests will be added for validation of the following proposed changes:
nova virt libvirt driver to validate handling of the hw_emulation_architecture image property value and associated checks.
nova scheduler request_filter to ensure proper handling of the prefilter, with the two new values added.
Proposed updates to tempest will account for the non-native architectures being supported through emulation.
AARCH64 architecture will be tested with every patch
Remaining architectures will be tested with the periodic-weekly and experimental pipelines.
Documentation Impact¶
A release note will be added. As there is end-user impact, user-facing documentation will be required for the supported emulation architecture types and the image meta properties that need to be set.
|
https://specs.openstack.org/openstack/nova-specs/specs/yoga/implemented/pick-guest-arch-based-on-host-arch-in-libvirt-driver.html
|
CC-MAIN-2022-27
|
en
|
refinedweb
|
An implementation of an extensibility hook is discovered using the Service
Provider mechanism. If the service provider file required by the extension
hook is present in the classpath in META-INF/services directory,
then JAX-WS tool time or runtime (whatever is appropriate) picks up the
implementation advertised by the service provider and invokes the relevant
methods. The implementation of these methods is provided with the information
required to perform the intended functionality.
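To make the lookup concrete, here is a minimal, illustrative sketch of the META-INF/services contract using the standard java.util.ServiceLoader API; the hook interface and class names are made up for this example, and the JAX-WS RI performs an equivalent lookup internally for its own hook types.

import java.util.ServiceLoader;

// Hypothetical hook type, used only for this sketch.
interface ExtensionHook {
    void apply();
}

public class HookLoader {
    public static void main(String[] args) {
        // Each JAR that provides an implementation ships a text resource named
        // META-INF/services/ExtensionHook containing the implementation's fully
        // qualified class name; ServiceLoader finds and instantiates each one.
        for (ExtensionHook hook : ServiceLoader.load(ExtensionHook.class)) {
            System.out.println("Discovered hook: " + hook.getClass().getName());
        }
    }
}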
The different extensibility points enabled by JAX-WS RI to impact WSDL/Service Endpoint Interface generation and consumption are explained next.
This extension hook, defined by com.sun.tools.ws.api.wsdl.TWSDLExtensionHandler
service provider, allows participation in the tool time WSDL parsing for the extensibility
elements in a WSDL. This enables an extension handler to be registered with
the JAX-WS WSDL importing tool, wsimport, for a namespace different
from that of WSDL 1.1. This allows such handlers to retrieve information from WSDL elements and use it later to, for example, generate annotations. The
appropriate method in the registered extension handler is invoked if an
extensibility element in that namespace is encountered during tool time WSDL
parsing.
For example, JAX-WS RI registers a tool time WSDL parsing extension handler, com.sun.tools.ws.wsdl.parser.W3CAddressingExtensionHandler, for the W3C WS-Addressing WSDL namespace (bound to the wsaw prefix). The handleInputExtension method is invoked if wsdl:portType/wsdl:operation/wsdl:input contains a wsaw:Action attribute. Similarly, other methods from TWSDLExtensionHandler, where WS-Addressing elements can occur and need to be handled, are overridden as well.
An implementation of com.sun.tools.ws.api.wsdl.TWSDLExtension can be created and added as an extension to com.sun.tools.ws.api.wsdl.TWSDLExtensible (for example com.sun.tools.ws.api.wsdl.TWSDLOperation) to store any information required later.
The tool time WSDL parsing creates a complete DOM tree of the WSDL and thus indexing
by the namespace URI works efficiently.
Table 1 gives the service provider file name and the
corresponding JAX-WS implementation class name.
This extension hook, defined by com.sun.tools.ws.api.TJavaGeneratorExtension
service provider, allows additional annotations to be generated on a Java
method during the WSDL-to-Java mapping. It may use the information stored in an
internal data structure or com.sun.tools.ws.api.wsdl.TWSDLExtension
element stored during the tool time WSDL parsing stage.
For example, JAX-WS RI uses com.sun.tools.ws.processor.generator.W3CAddressingJavaGeneratorExtension
to generate @javax.xml.ws.Action
annotation on a generated Java method, using information stored in an internal
structure.
This extension hook, defined by com.sun.xml.ws.api.wsdl.parser.WSDLParserExtension
service provider, allows participation in runtime WSDL parsing for the
extensibility elements in the WSDL. This allows the registered handlers to
retrieve information from WSDL elements, and use that later to, for example,
generate wsa:Action in the SOAP message. At runtime, all registered
handlers are invoked for an extension element. The handler then makes a decision
about whether it is interested in processing the current element, by peeking at
the QName of the element, or ignore it. This is different from tool-time parsing
where a complete DOM tree of the WSDL allows namespace indexing and thus calls
the correct extension handler.
For example, JAX-WS RI registers a runtime WSDL parsing extension handler, com.sun.xml.ws.wsdl.parser.W3CAddressingWSDLParserExtension, for processing WS-Addressing extension elements in the WSDL. As mentioned earlier, an implementation of WSDLExtension can be created and added as an extension to com.sun.xml.ws.api.model.wsdl.WSDLOperation to store any information required later.
To illustrate further, if there is an extension element in the WSDL in wsdl:portType/wsdl:operation/wsdl:input, then the portTypeOperationInput method is invoked for all the registered extension handlers. If wsdl:input contains a wsaw:Action attribute then its value is stored in an internal data structure and later used for generating wsa:Action on the SOAP message. Otherwise a default Action is generated, as defined by WS-Addressing 1.0 - WSDL Binding, by querying other data from the WSDL.
Another example is where WSIT defines com.sun.xml.ws.policy.jaxws.PolicyWSDLParserExtension
to process all WS-Policy related extension elements in the WSDL.
Table 1 gives the service provider file name and the
corresponding WSIT implementation class name.
This extension hook, defined by com.sun.xml.ws.api.wsdl.writer.WSDLGeneratorExtension
service provider, allows participation in runtime generation of extensibility elements
in the WSDL. This allows the registered handlers to
generate their own extensibility elements on various WSDL elements. Each method
is invoked with a com.sun.xml.txw2.TypedXmlWriter
parameter with an underlying WSDL element. This makes it possible to add an attribute, declare a new namespace URI, or append a new child element to the underlying WSDL element. Each
method is passed the information required to generate the extensibility element.
For example, JAX-WS RI registers a runtime WSDL generation handler, com.sun.xml.ws.wsdl.writer.W3CAddressingWSDLGeneratorExtension
for generating WS-Addressing extension elements in the WSDL. The addOperationInputExtension
method is invoked when wsdl:portType/wsdl:operation/wsdl:input is
generated. The JAX-WS RI handler obtains the value of @javax.xml.ws.Action
annotation from the java.lang.reflect.Method and generates the
appropriate wsaw:Action attribute.
Another example is where WSIT defines com.sun.xml.ws.policy.jaxws.PolicyWSDLGeneratorExtension
to generate all WS-Policy related extension elements in the WSDL.
Table 1: Service Provider file name and
Sample Implementations
In summary, the JAX-WS Reference Implementation provides a
feature-rich platform for middleware developers that allows them to extend the
capabilities of core functionality. The stand-alone JAX-WS implementation can be
downloaded from here. It is also
integrated in GlassFish v2 which can be downloaded from here.
Arun,
These capabilities are very useful.
I've been trying to add a WSDLGenerationExtension without success. I added the com.sun.xml.ws.api.wsdl.writer.WSDLGeneratorExtension to one of my app jar files in META-INF/services but the extension doesn't seem to be loaded.
I'm working with Tomcat 5.X and JAX-WS RI 2.1
Are there any known issues with SPI in JAX-WS 2 or any problem with any container?
Posted by: pablius on April 04, 2007 at 06:48 AM
Hi pablius,
I posted your question at:
Please follow it there.
Posted by: arungupta on April 09, 2007 at 04:56 PM
|
http://weblogs.java.net/blog/arungupta/archive/2007/02/jaxws_seiwsdl_p.html
|
crawl-002
|
en
|
refinedweb
|
To page through data in a control that implements the IPageableItemContainer interface, DataPager control can be used. GridView has its own paging and does not implement IPageableItemContainer interface. ListView is the only control that works with DataPager.
The DataPager control supports built-in paging user interface (UI). NumericPagerField object enables users to select a page of data by page number. NextPreviousPagerField object enables users to move through pages of data one page at a time, or to jump to the first or last page of data. The size of the pages of data is set by using the PageSize property of the DataPager control. One or more pager field objects can be used in a single DataPager control. Custom paging UI can be created by using the TemplatePagerField object. In the TemplatePagerField template, the DataPager control is referenced by using the Container property which provides access to the properties of the DataPager control. These properties include the starting row index, the page size, and the total number of rows currently bound to the control.
So let's begin extending the GridView with the IPageableItemContainer interface.
Define a new class as below:

/// <summary>
/// DataPagerGridView is a custom control that implements GridView and IPageableItemContainer
/// </summary>
public class DataPagerGridView : GridView, IPageableItemContainer
{
}
MaximumRows will be equal to the PageSize property:

/// <summary>
/// IPageableItemContainer's MaximumRows = PageSize property
/// </summary>
int IPageableItemContainer.MaximumRows
{
    get { return this.PageSize; }
}
StartRowIndex can be calculated from the PageSize and PageIndex properties:

/// <summary>
/// IPageableItemContainer's StartRowIndex = PageSize * PageIndex properties
/// </summary>
int IPageableItemContainer.StartRowIndex
{
    get { return this.PageSize * this.PageIndex; }
}

So set the Grid with appropriate parameters and bind to the right chunk of data:

/// <summary>
/// Set the control with appropriate parameters and bind to right chunk of data.
/// </summary>
/// <param name="startRowIndex"></param>
/// <param name="maximumRows"></param>
/// <param name="databind"></param>
void IPageableItemContainer.SetPageProperties(int startRowIndex, int maximumRows, bool databind)
{
    int newPageIndex = (startRowIndex / maximumRows);
    this.PageSize = maximumRows;
    if (this.PageIndex != newPageIndex)
    {
        bool isCanceled = false;
        if (databind)
        {
            // create the event arguments and raise the event
            GridViewPageEventArgs args = new GridViewPageEventArgs(newPageIndex);
            this.OnPageIndexChanging(args);
            isCanceled = args.Cancel;
            newPageIndex = args.NewPageIndex;
        }
        // if the event wasn't cancelled change the paging values
        if (!isCanceled)
            this.PageIndex = newPageIndex;
        if (databind)
            this.OnPageIndexChanged(EventArgs.Empty);
        this.RequiresDataBinding = true;
    }
}
For the DataPager to render the correct number of page buttons and enable/disable them, it needs to know the total number of rows and the page size. But this information is with the GridView and not known to the DataPager. The GridView control inherits from CompositeDataBoundControl, which contains a variant of the CreateChildControls method that also takes in a property indicating whether the control is being bound to data or simply re-rendered. The GridView uses this method to bind to its data source, and here we can place a trigger for the TotalRowCountAvailable event to be raised. Call the base control's CreateChildControls method and determine the number of rows in the source, then fire off the event with the derived data, and then return the original result.

protected override int CreateChildControls(IEnumerable dataSource, bool dataBinding)
{
    int rows = base.CreateChildControls(dataSource, dataBinding);
    // if the paging feature is enabled, determine the total number of rows in the datasource
    if (this.AllowPaging)
    {
        // if we are databinding, use the number of rows that were created,
        // otherwise cast the datasource to an ICollection and use that as the count
        int totalRowCount = dataBinding ? rows : ((ICollection)dataSource).Count;
        // raise the row count available event
        IPageableItemContainer pageableItemContainer = this as IPageableItemContainer;
        this.OnTotalRowCountAvailable(new PageEventArgs(pageableItemContainer.StartRowIndex, pageableItemContainer.MaximumRows, totalRowCount));
        // make sure the top and bottom pager rows are not visible
        if (this.TopPagerRow != null)
            this.TopPagerRow.Visible = false;
        if (this.BottomPagerRow != null)
            this.BottomPagerRow.Visible = false;
    }
    return rows;
}
That's all, you are done. Put the control on an aspx page and use the DataPager with the GridView control.
|
http://www.c-sharpcorner.com/UploadFile/nipuntomar/DataPagerGridView08012008123240PM/DataPagerGridView.aspx
|
crawl-002
|
en
|
refinedweb
|
> Maybe this is a FAO, but I'll give it a try anyway...
>
> I'm using the <xsl:attribute> to add new namspaces to my
> definitions tag.
The spec explicitly says you can't do this. Namespace declarations are
not attributes in the XSLT data model.
It works fine when I specify the namspace
> like this: <xsl:attribute""
> </xsl:attribute>
If it works fine, then your XSLT processor has a bug.
I would be interested to know why you are trying to add namespaces
dynamically. This is partly so that I can advise you how to solve your
problem, but it is also because the XML Query working group is currently
debating the requirements for creating namespaces in the result
document.
Michael Kay
Software AG
home: Michael.H.Kay@xxxxxxxxxxxx
work: Michael.Kay@xxxxxxxxxxxxxx
|
http://www.oxygenxml.com/archives/xsl-list/200207/msg01486.html
|
crawl-002
|
en
|
refinedweb
|
Feb. 13, 2004
GUEST COLUMN
Tax Freeze? More like
frostbite.
By David Newby
Less than six months after Wisconsin faced a massive
budget shortfall resulting in many serious program cuts, several Republicans are pushing a
radical plan to cap state and local revenues. Rep. Frank Lasee, among others, is trying to
import a complicated constitutional amendment from Colorado the so-called
Taxpayers' Bill of Rights (TABOR), which basically freezes real per capita public spending
permanently.
Once again, with this TABOR, Republicans are attempting
through state regulation to micro-manage the ability of local and state governing bodies
to determine the variety and quality of public services available to residents, and the
quality of public life available to us all. No one enjoys paying taxes, but with the
relentless demonization of taxes as somehow the enemy of prosperity, we have lost sight of
the role of government in creating a civil society in which all can reside safely and with
some minimum economic security.
Wisconsin promoters of the TABOR plan claim it will be a
silver bullet for Wisconsin's economic problems. According to an editorial by Jim Haney of
Wisconsin Manufacturers and Commerce, the low taxes resulting from TABOR made Coloradoans
richer, more productive and increased job growth. Unfortunately, like many pipedreams,
the facts don't bear out the claim. Governing Magazine - an award winning,
non-partisan journal read by 85,000 state and local policymakers - reports that TABOR
"has complicated Colorado's fiscal life so much that some of its supporters have
soured on it. 'In hindsight', says Republican Senator Ron Teck, 'I wouldn't vote for it
again' ".
There have been major cuts in funding for Colorado cities
and counties, so they have increased local option sales taxes. These taxes vary in rate
and exemption rules from one community to another, so businesses have to navigate
different costs in each. Many companies say that Colorado "is a nightmare to do
business with," according to Phyllis Resnick from the Tax Center at the University of
Denver. Colorado's bond rating has declined, which forces Colorado taxpayers to pay more
in state interest payments. And since the end of 2000, Colorado lost 80,000 jobs, a rate
300% higher than the U.S. average.
The revenue caps and added tax cuts leave Colorado a major
deficit which it cannot address because of the permanent tax freeze. Cuts in public
services have already been dramatic. According to the Governing Magazine report card,
Colorado is now one of the ten trouble spots in the United States in terms of children's
health care. Education funding as a percent of income is currently the lowest in the
country.
Wisconsin TABOR promoters say that the Colorado tax freeze
generated economic development. But, according to data from the Bell Policy Center (a
state policy research center), Colorado's growth was part of a regional boom that began
before TABOR was passed. The other fast growing states - Arizona, Utah, Idaho and Nevada -
did not have these restrictive revenue caps. "Climate, environment, lifestyle, clean
industry and diversifying economies were the economic drivers," states Carol Hedges,
author of Ten Years of TABOR, a comprehensive assessment of the policy.
The citizens of Colorado have learned that while the TABOR
tax freeze sounded like a good idea, in fact it is a painful and inflexible way to control
taxes which actually undermines the state economy, transportation and education systems,
and leaves working people, children, the sick and the elderly out in the cold. This
so-called taxpayer bill of rights is a destructive bill of goods that will cost Colorado
residents dearly for years to come. Wisconsin would be wise to dodge this bullet.
Newby is president of the Wisconsin
AFL-CIO
Reprinted with permission of the
author.
|
http://www.wiscities.org/TABOR-Newby.htm
|
crawl-002
|
en
|
refinedweb
|
Java Supplants Scripting
In the early days of the enterprise Java standard (what was called J2EE and is now called Java EE), many Web site designers employed server-side scripting to implement functionality. Available tools and software made it easy to develop and deploy this type of business logic, but they had two huge drawbacks: poor performance and little debugging assistance. Server-side scripts, such as those implemented as JavaScript, are not compiled; they're interpreted as they execute. This leads to poor performance and little to no scalability.
The emergence of the Java Servlet specification and, subsequently, JavaServer Pages (JSP) put an end to all of that. Because these server-side technologies were based on pure Java, tools emerged to help you debug the code. Additionally, since the Java code could be compiled (thanks to the HotSpot compiler), performance and scalability were excellent. This marked the end of mainstream server-side scripting.
The Return of Scripting Languages
With the advent of Asynchronous JavaScript and XML (AJAX)-based rich Web applications, scripting languages have made a comeback. This time, the script is meant to run at the client, where scalability generally isn't an issue (each client runs its own browser and hence its own scripts). The result is a very dynamic Web page that includes rich application features with acceptable performance.
Further reasons to use scripting languages in a Web application include dynamic type conversion (automatic conversions from values to strings), access to the operating system environment (as with shell scripts), and the use of specialized Web frameworks for scripting languages. For these reasons, dynamic scripting languages have reemerged on the server as well.
However, this reintroduction of scripting languages has left programmers wanting the following:
Java SE 6 satisfies both of these requests with JSR 223 (Scripting for the Java Platform). When you join the features of both the Java language and available scripting languages, you can pick and choose the strengths of both environments to use at the same time. For instance, Java developers can access Perl scripts to perform string operations that are best done with Perl. Additionally, AJAX developers can invoke Java objects directly from script embedded within a Web page to perform complex operations. For example, since database access is far less robust (if not impractical) from JavaScript as compared with Java, you can perform this and other complex operations in Java code and simply invoke the Java code from your page's script.
Java SE 6 ships with the Mozilla Rhino scripting engine, but you're free to substitute any available scripting engine that complies with JSR 223. (For a list of JSR 223-compliant script engines, click here.) This includes implementations of Python and Ruby. The new javax.script APIs provide access to the scripting environments from Java. For instance, the following code iterates through the list of available scripting engines and outputs the language types and the associated engines:
import java.util.*;
import javax.script.*;
public class Main
{
public static void main(String[] args)
{
try {
ScriptEngineManager mgr = new ScriptEngineManager();
List<ScriptEngineFactory> factories = mgr.getEngineFactories();
System.out.println("Available script engines:");
for ( int i = 0; i < factories.size(); i++ )
{
ScriptEngineFactory factory = factories.get(i);
String engine = factory.getEngineName();
String language = factory.getLanguageName();
System.out.println("-------------------------------------------");
System.out.println("Language: " + language );
System.out.println("Engine: " + engine);
System.out.println("-------------------------------------------");
}
}
catch ( Exception e ) {
e.printStackTrace();
}
}
}
JSR 223-compliant scripting engines must implement the javax.script.ScriptEngine interface and be packaged in JAR files with a META-INF/services/javax.script.ScriptEngineFactory text resource. Once you deploy a compliant scripting engine JAR file into your Java environment or with your application, you can access this engine from your Java code. To do this, simply include the engine's JAR file within your application's classpath.
Let's examine the Rhino JavaScript engine's Java scripting features with some simple examples.
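As a taste of what such examples look like, here is a minimal, illustrative sketch (the class and variable names are mine, not from the article) that evaluates script, shares a Java object with it, and calls a script function back from Java:

import javax.script.Invocable;
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;

public class RhinoQuickTour {
    public static void main(String[] args) throws Exception {
        ScriptEngine engine = new ScriptEngineManager().getEngineByName("JavaScript");

        // Evaluate an expression and get the result back as a Java object.
        Object sum = engine.eval("2 + 2");
        System.out.println("2 + 2 = " + sum);

        // Expose a Java object to the script under the name "list"...
        java.util.List<String> list = new java.util.ArrayList<String>();
        engine.put("list", list);
        // ...and let the script call its methods directly.
        engine.eval("list.add('added from script')");
        System.out.println(list);

        // Define a script function, then invoke it from Java.
        engine.eval("function greet(name) { return 'Hello, ' + name; }");
        Invocable inv = (Invocable) engine;
        System.out.println(inv.invokeFunction("greet", "Java"));
    }
}

The cast to Invocable works because the bundled Rhino engine implements that optional interface; not every JSR 223 engine is required to.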
|
http://www.devx.com/Java/Article/33206/0
|
crawl-002
|
en
|
refinedweb
|
The JMX API includes the possibility to create "Dynamic
MBeans", whose management interface is determined at run time.
When might that be useful? Here's an example.
In the JMX
forum on the Sun Developer Network, Athar asks
how to load a properties file in a dynamic MBean. I think
that's an excellent question, because it's exactly the example I
usually use for "Runtime" Dynamic MBeans.
What I call a "Runtime" Dynamic MBean is one whose management
interface you cannot determine by looking at the source code.
Obviously Standard MBeans aren't like this, because you can just
look at WhateverMBean.java to see what the
management interface is going to be. This is still true for
MBeans constructed using the StandardMBean
class, and it's also true for MXBeans.
It isn't necessarily true for Dynamic MBeans. A Dynamic MBean
is a Java object of a class that implements the DynamicMBean
interface. This interface includes a method getMBeanInfo().
A class that implements DynamicMBean can construct the MBeanInfo
object that it returns from getMBeanInfo() however it likes. It
can even return a different MBeanInfo every time it is
called!
This flexibility is almost never necessary. Nearly always,
when you create a Dynamic MBean, it is because you want to add
extra information to the MBeanInfo, or because you want to
implement the logic to get an attribute or call an operation in
some particular way. Just like dynamic
code generation, my advice if you are considering making a
Runtime Dynamic MBean is to think really hard about whether you
couldn't redesign things so that the interface is known at
compile time. The problem with an MBean interface only known at
run time is that it's hard for a client to interact with it.
Suppose your client wants to call getAttribute
on your MBean. The only way it can know what attributes are
available is to call getMBeanInfo
beforehand. If the MBean's interface can change as it is
running, even this isn't guaranteed to work!
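To make that concrete, here is a minimal sketch of what such a generic client does; it uses the local platform MBean server and the java.lang:type=Runtime MBean purely for illustration, whereas a remote client would get its MBeanServerConnection from a JMXConnector.

import java.lang.management.ManagementFactory;
import javax.management.MBeanAttributeInfo;
import javax.management.MBeanInfo;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

public class AttributeDiscovery {
    public static void main(String[] args) throws Exception {
        MBeanServerConnection mbsc = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("java.lang:type=Runtime");
        // Ask the MBean what attributes it has before trying to read them.
        MBeanInfo info = mbsc.getMBeanInfo(name);
        for (MBeanAttributeInfo attr : info.getAttributes()) {
            if (!attr.isReadable())
                continue;
            try {
                System.out.println(attr.getName() + " = "
                        + mbsc.getAttribute(name, attr.getName()));
            } catch (Exception e) {
                // some attributes may be unsupported on a given platform
                System.out.println(attr.getName() + " unavailable: " + e);
            }
        }
    }
}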
However, there are some cases where it makes a certain amount
of sense to have a Runtime Dynamic MBean, and Athar's question
suggests one of them. Suppose you have a properties file
containing configuration for your application, and you'd like to
expose its contents for management, so that you can see the
values of configuration items, and perhaps change them as the
application is running. The obvious way to do this is to have a
ConfigurationManagementMBean that is linked to the properties
file.
Every time you change your app to add a
new configuration item, you'll need to add it to the initial
configuration file, and you'll need to add code to interpret it.
But it would be a pain to have to add a new attribute explicitly
to the ConfigurationManagementMBean as well. So this argues for
one of two approaches:
- give the MBean a single attribute, of type Properties or Map<String,String>, whose value is the complete set of properties; or
- give the MBean one attribute per property, so each configuration item shows up individually in the management interface.
If you adopt the second approach, then JConsole looking at your
ConfigurationManagementMBean might look like this:
I'll present the code to implement this below. A few things
are worth noting. First of all, the DynamicMBean interface is a
little bit clunky, in particular the
getAttributes and (especially)
setAttributes methods. The problem that generates this
clunkiness is what to do if one of the attributes to be set
produces an error. Should you throw an exception? If so, have
any of the other attributes been set? The cleanest solution
would be to say that setAttributes is an all-or-nothing
operation: either it sets all of the given attributes, or it
sets none of them and throws an exception. However, the
designers of the JMX API felt that this was a harsh constraint
to put on MBean writers. What's more it is not at all obvious
how it should apply to Standard MBeans. So instead,
setAttributes returns an AttributeList containing the attributes
that were actually set. The caller needs to check that this
contains all the values that were supposed to be set, and react
appropriately if not.
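A sketch of the kind of check a caller ends up writing follows; the MBeanServerConnection, ObjectName and attribute names are placeholders, not taken from any particular MBean.

import javax.management.Attribute;
import javax.management.AttributeList;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

public class SetAttributesCheck {
    // Sets the requested attributes and reports any that the MBean did not accept.
    static void setAndVerify(MBeanServerConnection mbsc, ObjectName name,
                             AttributeList toSet) throws Exception {
        AttributeList wereSet = mbsc.setAttributes(name, toSet);
        for (Object o : toSet) {
            Attribute requested = (Attribute) o;
            boolean applied = false;
            for (Object p : wereSet) {
                if (((Attribute) p).getName().equals(requested.getName()))
                    applied = true;
            }
            if (!applied)
                System.err.println("Attribute was not set: " + requested.getName());
        }
    }
}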
The code doesn't let you set a value for a property that was
not already present. The MBean Server does not check that the
attribute name in setAttribute is present in the MBeanInfo. It
is up to the MBean to do that. An MBean could choose to accept
such a name, which in this case would allow you to define new
properties. But I think it would be better to achieve that in
some other way, for example an explicit addProperty
operation.
In addition to one attribute per property, I've defined an
operation reload which reloads the properties from
the file. If there are properties in the file that were not
present before, then they will appear as new attributes. Notice
that adding an operation requires you both to mention it in
getMBeanInfo and to recognize it in
invoke. If there are many operations, you might
want to consider getting the StandardMBean
class to do some of the work for you.
Finally, every time you change a property the code updates the
configuration file. The way it does this is intended to be a
safe way to update a file. It writes a new properties file in
the same directory, then renames it over the original. On most
operating systems, renaming is atomic, so even if your app is
interrupted in the middle of this operation, you will end up
with either the old file or the new file, but not with a missing
or partially-written file.
package propertymanager;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.Iterator;
import java.util.Properties;
import java.util.SortedSet;
import java.util.TreeSet;
import javax.management.Attribute;
import javax.management.AttributeList;
import javax.management.AttributeNotFoundException;
import javax.management.DynamicMBean;
import javax.management.InvalidAttributeValueException;
import javax.management.MBeanAttributeInfo;
import javax.management.MBeanException;
import javax.management.MBeanInfo;
import javax.management.MBeanOperationInfo;
import javax.management.ReflectionException;
public class PropertyManager implements DynamicMBean {
private final String propertyFileName;
private final Properties properties;
public PropertyManager(String propertyFileName) throws IOException {
this.propertyFileName = propertyFileName;
properties = new Properties();
load();
}
public synchronized String getAttribute(String name)
throws AttributeNotFoundException {
String value = properties.getProperty(name);
if (value != null)
return value;
else
throw new AttributeNotFoundException("No such property: " + name);
}
public synchronized void setAttribute(Attribute attribute)
throws InvalidAttributeValueException, MBeanException, AttributeNotFoundException {
String name = attribute.getName();
if (properties.getProperty(name) == null)
throw new AttributeNotFoundException(name);
Object value = attribute.getValue();
if (!(value instanceof String)) {
throw new InvalidAttributeValueException(
"Attribute value not a string: " + value);
}
properties.setProperty(name, (String) value);
try {
save();
} catch (IOException e) {
throw new MBeanException(e);
}
}
public synchronized AttributeList getAttributes(String[] names) {
AttributeList list = new AttributeList();
for (String name : names) {
String value = properties.getProperty(name);
if (value != null)
list.add(new Attribute(name, value));
}
return list;
}
public synchronized AttributeList setAttributes(AttributeList list) {
Attribute[] attrs = (Attribute[]) list.toArray(new Attribute[0]);
AttributeList retlist = new AttributeList();
for (Attribute attr : attrs) {
String name = attr.getName();
Object value = attr.getValue();
if (properties.getProperty(name) != null && value instanceof String) {
properties.setProperty(name, (String) value);
retlist.add(new Attribute(name, value));
}
}
try {
save();
} catch (IOException e) {
return new AttributeList();
}
return retlist;
}
public Object invoke(String name, Object[] args, String[] sig)
throws MBeanException, ReflectionException {
if (name.equals("reload") &&
(args == null || args.length == 0) &&
(sig == null || sig.length == 0)) {
try {
load();
return null;
} catch (IOException e) {
throw new MBeanException(e);
}
}
throw new ReflectionException(new NoSuchMethodException(name));
}
public synchronized MBeanInfo getMBeanInfo() {
SortedSet<String> names = new TreeSet<String>();
for (Object name : properties.keySet())
names.add((String) name);
MBeanAttributeInfo[] attrs = new MBeanAttributeInfo[names.size()];
Iterator<String> it = names.iterator();
for (int i = 0; i < attrs.length; i++) {
String name = it.next();
attrs[i] = new MBeanAttributeInfo(
name,
"java.lang.String",
"Property " + name,
true, // isReadable
true, // isWritable
false); // isIs
}
MBeanOperationInfo[] opers = {
new MBeanOperationInfo(
"reload",
"Reload properties from file",
null, // no parameters
"void",
MBeanOperationInfo.ACTION)
};
return new MBeanInfo(
this.getClass().getName(),
"Property Manager MBean",
attrs,
null, // constructors
opers,
null); // notifications
}
private void load() throws IOException {
InputStream input = new FileInputStream(propertyFileName);
properties.load(input);
input.close();
}
private void save() throws IOException {
String newPropertyFileName = propertyFileName + "$$new";
File file = new File(newPropertyFileName);
OutputStream output = new FileOutputStream(file);
String comment = "Written by " + this.getClass().getName();
properties.store(output, comment);
output.close();
if (!file.renameTo(new File(propertyFileName))) {
throw new IOException("Rename " + newPropertyFileName + " to " +
propertyFileName + " failed");
}
}
}
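To try the class out, a minimal sketch of registering it might look like this; the properties file name and ObjectName below are just examples, not part of the code above.

package propertymanager;

import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class Agent {
    public static void main(String[] args) throws Exception {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        // Illustrative name and file; pick whatever suits your application.
        ObjectName name = new ObjectName("propertymanager:type=PropertyManager");
        mbs.registerMBean(new PropertyManager("app.properties"), name);
        System.out.println("PropertyManager registered; browse it with JConsole.");
        Thread.sleep(Long.MAX_VALUE); // keep the JVM alive so the MBean can be inspected
    }
}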
Thanks Eamonn.. This was exactly the one i was looking for. Infact to load the properties file dynamically, so that it can be reloaded when the properties file is edited when exposed by the Mbean. Though i'd nt specified what exactly i wanted, you could get what i intended to tell.. thanks a lot !
Athar
Posted by: athahar on November 09, 2006 at 02:00 AM
Hi Eamonn,
I wonder if you could help me with this:
I created a dynamic MBean similar to the one you are presenting here, and I registered in a local MBean server.
Then, from a remote machine, I opened an RMI MBeanServerConnection to that MBean server, and created a proxy to the dynamic MBean (using the MBeanServerInvocationHandler.newProxyInstance() method).
In that remote machine, I also have a remote MBean server, and I'd like to register the proxy as an MBean in that server. The problem is how to obtain the interface for the dynamic MBean at runtime. I tried to get it using mbsc.getMBeanInfo(MBeanName) , but this does not return an interface that I can use in server.registerMBean() method.
Could you provide any hint ?
Thanks,
Jaime.
Posted by: jimbojava on January 22, 2007 at 08:37 AM
jimbojava, the short answer is that you can't register the object you get from MBeanServerInvocationHandler.newProxyInstance (or JMX.newMBeanProxy in Java 6) in the MBean Server. Instead you need to use a class something like this:
public class DynamicMBeanProxy implements DynamicMBean {
private final MBeanServerConnection mbsc;
private final ObjectName objectName;
/** Creates a new instance of DynamicMBeanProxy */
public DynamicMBeanProxy(MBeanServerConnection mbsc, ObjectName objectName) {
this.mbsc = mbsc;
this.objectName = objectName;
}
public Object getAttribute(String name)
throws AttributeNotFoundException, MBeanException, ReflectionException {
try {
return mbsc.getAttribute(objectName, name);
} catch (IOException e) {
throw new MBeanException(e);
} catch (InstanceNotFoundException e) {
throw new MBeanException(e);
}
}
...other methods from DynamicMBean similarly...
}
This should work in the simple case you are talking about, but there are interesting issues with performance and notifications. I am planning to write a blog entry with the full story.
Posted by: emcmanus on January 22, 2007 at 10:01 AM
Thank you, that makes a lot of sense.
And thank you also for mentioning the performance issues, it is my main concern here.
I'm trying to use a "master" MBean server to centralize management of MBeans from "subordinate" MBean servers. The idea is to manage a client application (always the same, an application all employees use), running on desktop/laptop computers. That application would contain the embedded subordinated MBean server. So the subordinate MBean servers can be of the order of thousands, each having just one or two MBeans (the same MBeans, for example the PropertyManager), and the master MBean server would ask for values of certain properties to produce statistics, or would set the value of the properties, or it could launch a window in the application to notify all employees about some specific issue, etc.
Thanks again,
Jaime.
Posted by: jimbojava on January 22, 2007 at 02:41 PM
Hi Eamonn, I wonder if you could help me with this: - This example deals with property file. What if I want to achieve the same thing with XML file. Problem is that in my configuration.xml there are sub nodes used. I have api for reading that xml file but there is no generic method like getPropertyByName() so that I can place it in dynamic mbean. So my question is: how to handle such situation where you have different methods to read the items from main nodes and subnodes??
Will it be necessary to have a method of kind getPropertyByName() ??
But what if two sub nodes are having similar attributes?? e.g.
LogConfiguration
----Logger
-------LoggerName v="DEFAULT"
-------LogLevel v="INFO"
---- /Logger
-----Logger
-------LoggerName v="CONFIG"
-------LogLevel v="WARN"
-----/Logger
-----Logger
-------LoggerName v="AUTH"
-------LogLevel v="DEBUG"
-----/Logger
/LogConfiguration.
Posted by: pks_chennai on August 03, 2007 at 05:03 AM
Thanks,
very useful and interestingexample
Posted by: lukebike on November 21, 2007 at 03:53 AM
Hi Eamonn,
Thanks a lot for this example, it really has helped me out. However, I am getting strange behavior when I try to invoke methods on my MBean.
Basically, I found I had to add method handling in invoke() for all methods exposed in DynamicMBean interface, and even when I did that the setAttribute(Sttribute) method for some reason is not going through invoke(). Also, this non-invoke()-based invocation is getting passed an Attribute object with name set to "Attribute" and value set to "javax.management.Attribute@", rather than what I am setting on the client, so it is not updating the property correctly.
I have no idea why this strange behavior is happening... I posted more details here:
Thanks in advance for any advice/direction you might be able to give!
cheers,
Doug
Posted by: doug_harley on May 28, 2008 at 03:45 PM
never mind, i figured out my problem: incorrect use of APIs. DOH!
i am not sure why my mangled code was partially working, it must have been a miracle. actually, it was more like a curse cause if it had just failed miserably i would have realized quicker i was on wrong track. i should have reviewed the JSR at start instead of hacking away...
thanks again for this dynamic mbean properties file manager example, awesome stuff.
Posted by: doug_harley on May 28, 2008 at 11:12 PM
|
http://weblogs.java.net/blog/emcmanus/archive/2006/11/a_real_example.html
|
crawl-002
|
en
|
refinedweb
|
Tuesday, July 29, 2008
posted @ Tuesday, July 29, 2008 12:56 PM | Feedback (0) |
Filed Under [
OT Rants
]
Tuesday, July 17, 2007
I'm an e-vangelist, apparantly... I envy all the e-experts here on GWB :-D
your e-score: 78
your e-group: e-vangelist
your e-ranking: 1040/9733
Check your own score at.
posted @ Tuesday, July 17, 2007 2:24 PM | Feedback (2) |
Filed Under [
OT Rants
]
Friday, June 15, 2007
- a great little list from Phil Haack (and be sure to read comments as well)...
posted @ Friday, June 15, 2007 3:30 PM | Feedback (0) |
Tuesday, May 08, 2007
posted @ Tuesday, May 08, 2007 11:01 AM | Feedback (0) |
Tuesday, April 03, 2007
As I'm sitting here, my wrists are hurting slightly, due to the fact that at my new job (well, I'm back at my old workplace, but with a new title and all that) I don't have my normal keyboard. Also, my "new" chair sucks big time, so it's all agony ;-)
So I thought I'd share an old link with you guys: Jon Galloway : Mouseless Computing - although I know it sounds a bit backwards, asking you to use the keyboard more with my current keyboard being the culprit, it's a great post nevertheless, that should be as widely spread as possible...
posted @ Tuesday, April 03, 2007 9:10 AM | Feedback (0) |
Monday, February 26, 2007
Whaaat? This screen appeared in Visual Studio 2005, right after I deleted a server control from HTML view and saved the file (not the designer-file, the aspx-file). And it just stayed there. No cancel, no way back into Visual Studio what so ever!? I wonder if I'm missing a service pack :-s
Ended up terminationg the VS-process manually. Bummer...
posted @ Monday, February 26, 2007 9:02 PM | Feedback (7) |
Wednesday, January 03, 2007
posted @ Wednesday, January 03, 2007 4:31 AM | Feedback (0) |
Thursday, December 14, 2006
Pro users no longer have upload limits - and regular users now have a 100MB limit each month, instead of the 20MB they used to:
That's damn good news for me, having just bought a new 10,1MP camera - with a 2GB memory card...
posted @ Thursday, December 14, 2006 8:45 AM | Feedback (0) |
Friday, December 01, 2006
Have a look at the top 20 lessons learned by JD in his 20 years of programming:
[Via Software by Rob]
posted @ Friday, December 01, 2006 1:10 PM | Feedback (0) |
Friday, November 10, 2006
posted @ Friday, November 10, 2006 2:01 PM | Feedback (0) |
Wednesday, November 08, 2006
I just installed a "funny" little (free) plugin for VS.2005. It's called SlickEdit Gadgets - I haven't really figured out yet if it can help me be more productive, but try it out for yourselves:
Download the free SlickEdit Gadgets thingy here
posted @ Wednesday, November 08, 2006 12:11 PM | Feedback (2) |
Tuesday, November 07, 2006
In case you haven't heard, Microsoft just released the WinFX/.NET 3.0 Framework into RTM...
posted @ Tuesday, November 07, 2006 10:07 AM | Feedback (0) |
posted @ Tuesday, October 31, 2006 9:40 AM | Feedback (0) |
Monday, September 18, 2006
posted @ Monday, September 18, 2006 5:34 PM | Feedback (0) |
I hate command prompts.
I know - I'm supposed to be a geek and all, and therefore love prompts and stuff. But I just hate them anyway. So I was very pleased when I stumbled across andrewconnell.com, where I found a very nifty trick for avoiding some command prompt hell...
BTW, the article says it's a VS2005 trick, but it works fine for me in VS2003...
posted @ Monday, September 18, 2006 9:42 AM | Feedback (0) |
Wednesday, August 02, 2006
posted @ Wednesday, August 02, 2006 4:33 PM | Feedback (0) |
Wednesday, June 28, 2006
posted @ Wednesday, June 28, 2006 8:31 AM | Feedback (0) |
Wednesday, June 14, 2006
posted @ Wednesday, June 14, 2006 8:34 AM | Feedback (0) |
The hard work, the countless hours and endless nights. I am now finally a respected blogger (not like back in the old days):
My blog is worth $1,129.08. How much is your blog worth?
Hehe ;-)
posted @ Wednesday, June 14, 2006 6:35 AM | Feedback (1) |
Friday, June 02, 2006
posted @ Friday, June 02, 2006 6:24 AM | Feedback (0) |
Tuesday, May 16, 2006
So, I finally figured out why my blog didn't have CAPTCHA (why am I constantly writing CAPTHCA?) for the comments. It turns out that Jeff (Julian) had forgotten to add it to the skin I'm using here - I wish I had known before I manually deleted something like 60 spam comments, all from the same stupid [product name] moron.
I had been looking for the CAPTHCA CAPTHCA CAPTCHA (I hate that "word") option for ages, but didn't find it. Because it isn't there. So I asked Chris Williams what he'd done to get it, and he directed me to Jeff.
Thanks, guys!
posted @ Tuesday, May 16, 2006 12:33 PM | Feedback (1) |
Thursday, May 11, 2006
posted @ Thursday, May 11, 2006 8:50 AM | Feedback (0) |
Wednesday, May 03, 2006
What I've learned today.
using System;
using System.Threading;
using System.Windows.Forms;

public class MyClass
{
    public static void Main()
    {
        bool exclusive;
        // Ask for initial ownership of a named mutex; "exclusive" tells us
        // whether this is the only running instance.
        Mutex m = new Mutex(true, "C4F-TrickedOut", out exclusive);
        if (exclusive)
        {
            Application.EnableVisualStyles();
            Application.SetCompatibleTextRenderingDefault(false);
            Form1 mainForm = new Form1();
            Application.Run(mainForm);
        }
        else
        {
            MessageBox.Show("Another instance is already running.");
        }
    }
}
posted @ Wednesday, May 03, 2006 12:18 PM | Feedback (2) |
Friday, March 31, 2006
posted @ Friday, March 31, 2006 7:53 AM | Feedback (5) |
."
<OBJECT>
posted @ Friday, March 31, 2006 6:28 AM | Feedback (0) |
|
http://geekswithblogs.net/jannikanker/Default.aspx
|
crawl-002
|
en
|
refinedweb
|
Have you seen this unofficial Java API for Google Translator? So, I was thinking that it could be interesting to mix this stuff with Java Speech API (JSAPI).
JSAPI was divided to support two things: speech recognizers and synthesizers. Speech synthesis is the process of generating human speech from written text for a specific language. Speech recognition is the process of converting human speech to words/commands. This converted text can be used or interpreted in different ways (interesting and simple definition from this article).
It seems that there are not too many open source projects that care about the recognizer part. I have found a very interesting one called Sphinx, but did not have time to try it yet. I was thinking how cool it would be to have open source software to make possible a talk between two different people: you say something in one language, it translates it to another language and says it. Has anybody seen anything like that? Non commercial? For voip?
So, I worked on part of a demo, but only with the synthesizer part; the text input is manual. I used FreeTTS for that. Basically this piece of code gets a word input in Portuguese, translates it to English and then says the word.
package speech;
import com.google.api.translate.Language;
import com.google.api.translate.Translate;
import com.sun.speech.freetts.Voice;
import com.sun.speech.freetts.VoiceManager;
import java.io.BufferedReader;
import java.io.InputStreamReader;
public class Main {
    public static void main(String[] args) {
        // Pick one of the FreeTTS voices and get it ready to speak.
        VoiceManager voiceManager = VoiceManager.getInstance();
        Voice voice = voiceManager.getVoice("kevin16");
        voice.allocate();
        String text = null;
        do {
            try {
                System.out.println("Type a word in Portuguese and listen to it in English -> ");
                text = new BufferedReader(new InputStreamReader(System.in)).readLine();
                // Translate the Portuguese input to English, then speak the result.
                voice.speak(Translate.translate(text, Language.PORTUGESE, Language.ENGLISH));
            } catch (Exception ex) {
                ex.printStackTrace();
            }
        } while (!text.equals("!quit"));
        voice.deallocate();
        System.exit(0);
    }
}
If you got excited about working on an open source project like that, go ahead and write something! I can guarantee the fun!
Really cool idea!
I don't know of any app doing that, but I would be interested in contributing to such a project!
Lots of possibilities, but I wonder how efficient this can be, since automatic translation fails on grammar and syntax...
Posted by: alois on November 11, 2008 at 07:13 AM
Hello alois, I wish I had time right now to start a project like that. Anyway, if anyone gets interested, I can help in some way too. We would definitely find problems with grammar and syntax, but we can still start something... Cheers!
Posted by: brunogh on November 14, 2008 at 05:21 AM
Hi, I tried this FreeTTS example, but the audio file is not being detected. Tell me how I can overcome this problem...
Posted by: chandrasekar85 on February 16, 2009 at 03:46 AM
Hello, please make sure you have set it up correctly. Otherwise, try its users mailing list.
Posted by: brunogh on February 16, 2009 at 04:20 AM
http://weblogs.java.net/blog/brunogh/archive/2008/11/playing_with_tr.html
I spent a couple of hours today looking at the problem we encountered yesterday with the xpath() function in BizTalk. We were attempting to use an XPath to extract the value of a nested element and assign it to a string variable. The nested element (<Direction>) was in the global (anonymous) namespace, but was a child of an element in a named namespace. The code failed on XmlSerializer de-serialisation with an error saying "<Direction xmlns=''> not expected".
At first, I strongly suspected that the problem was due to the use of the anonymous namespace, especially as the element was defined in a schema that was then imported into the message schema. Both schemas had the 'Element FormDefault' attribute set to default (unqualified), and we had ended up with a horrible interleaved mess of elements, some of which were in defined namespaces, whilst others were in the global namespace. Always, always set Element FormDefault to 'Qualified' in your schemas, unless you have a really good reason not to. We are now in the process of changing all our schemas to eliminate the use of global namespaces.
After a while, I convinced myself that there was a problem with the .NET XmlSerializer, especially after reading a number of news group threads on the Internet. From what I read, it seemed that the XmlSerializer might have problems with "xmlns=''" declarations. To prove the point, I wrote a little code to reproduce the issue. Unfortunately, however, the XmlSerializer worked perfectly, whatever I did. It seems that it has no issues at all with "xmlns=''".
I then spent some time with Reflector, digging into the BizTalk code to see what it was doing, and eventually I discovered the problem...and it was all our fault!!
We had a line of code in the orchestration that attempted to use the BizTalk xpath() function to return the value held in the <Direction> element and assign it to a string variable. The only problem was that the XPath was addressing the <Direction> node, and not its contents. The XPath processor uses XML DOM internally, and expressions return XML nodes or node sets. In our case, the XPath we used was returning an XmlElement node from our XML message. We were trying to assign this to a string variable (doh)!
This very basic XPath error was made opaque by the fact that we got no compile-time error, but instead suffered a run-time exception thrown by the XmlSerializer class. You might reasonably expect that the exception would indicate some kind of type mismatch or cast failure. Instead, we got a general-purpose XmlSerializer exception stating that the XML content was unexpected. The reason for this became clear when we looked a little deeper into what happens when we call the xpath function. BizTalk generates C# code for the line that calls the xpath function. The generated code calls a method to which it passes the Type of the variable to which the result of the method will be assigned. This Type is used to initialise an instance of XmlSerializer used internally by BizTalk. In our case, we ended up with an instance of XmlSerializer that attempted to de-serialise an XmlElement (addressed by the XPath) as a string object. This, of course, didn't work, and the XmlSerializer returned an exception saying that the XmlElement 'was not expected'.
The XmlSerializer object is used to deserialise XML content obtained using XPathNavigator and XmlNodeReader. In a reversal of the principle of strong typing, BizTalk's XLang/s language effectively 'trusts' the developer to select the right type of variable to hold the results of the xpath() function. If the developer gets this wrong (as we did), the code blindly attempts to perform de-serialisation to the incorrect type and fails at run time.
The best fix in our case was simply to extend the end of the XPath with '/text()'. This returns the text node contained in the <Direction> node (XML DOM sees the text node as a nested node). This nicely de-serialises to a string. Another option would be to retain the XPath as is, but assign the result to an XmlElement variable, and then extract the inner text. This is less direct, and may require an additional atomic scope because XmlElement is not serialisable.
So, the problem was nothing to do with the use of anonymous namespaces or the .NET XmlSerializer. It was a basic logical error in our own code that took ages to spot due to the exception that was raised. If you use the xpath function and get an "<xxxxx xmlns=''> not expected" exception, check your XPath to see what it is actually returning, and the variable you are assigning the results to.
http://geekswithblogs.net/cyoung/archive/2006/12/12/100981.aspx
By: Kenneth A. Faw
Abstract: This paper will provide the information needed to successfully guide individuals towards JBuilder certification.
Since the Study Guide is broken into four main categories, we will do
the same. The summary will also provide information about the Borland
Partnership Programs at the date of this writing.
The following list provides quick links to each section that follows:
There are also many books available about JBuilder, but my recommendation
is that you start with a solid foundation in Java, take the official course,
and then browse through the JBuilder manuals for anything else you might
have missed. The course covers most of what you need. If you would like
resources on Java, consider the following:
(back to intro)
One of the features that makes Java portable is its treatment of primitive
types. Whereas most languages define primitive types in terms of the native
CPU size, Java defines all primitives to be the same, regardless of deployment
platform. That means that an integer in Java (represented by the int
type) is 32 bits, whether the native operating system is 32 bits or 64
bits. The result is that files written from Java on one OS should be readable
into Java on a different OS without concern for formatting. (You should
also learn the primitive types, but that is part of section 3.)
Also important to the production use of Java is the advent of the JIT
compiler. A JIT compiler is a Java Virtual Machine that loads and interprets
Java byte codes just as any other virtual machine. However, the JIT compiler
caches the resulting machine code for quicker execution at a later time,
rather than working in a strictly interpretive mode. Recently, JIT compilers
have appeared that use predictive algorithms to look ahead of your process
and compile Java byte code before you need it, enhancing performance
even more. (Sun's HotSpot Performance Engine is an example of such a machine.)
Deprecation is an unrelated term that refers to methods and classes
in a previous version of the JDK which were replaced by newer substitutes
in the current version. Again, it is a compiler option that generates warnings
whenever deprecated methods are used in your code. It is a good idea, in
general, to remove references to deprecated methods in favor of their newer
alternatives, since deprecated methods may be completely removed from a
subsequent JDK release.
Often, a deprecated method is replaced by more than one method, or by
an entire collaboration of classes, so replacing the method call may not
be trivial. In other cases, methods are deprecated without a suggested
alternative. Such is the case with the stop method and other related
methods of the Thread class. Explicitly stopping a Thread can be dangerous
and lead to deadlocks because the Thread does not have a chance to release
its resources or other locks before termination. During the tutorial, you
will get an alternative pattern you can use. If you wish, here is some
space to take notes:
You are welcome.
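To make that concrete, here is a minimal sketch of the kind of cooperative pattern usually recommended in place of the deprecated stop method. The class and method names are invented for illustration and are not part of any official API:
public class Worker implements Runnable {
    // volatile so the change made by requestStop() is guaranteed
    // to become visible to the worker thread.
    private volatile boolean stopRequested = false;

    public void requestStop() {
        stopRequested = true;
    }

    public void run() {
        while (!stopRequested) {
            // do one small unit of work, then re-check the flag
        }
        // the thread ends normally here, after releasing its own
        // resources and locks, which Thread.stop could not guarantee
    }
}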
Another feature of the JDK is its support for Unicode characters. Unicode
characters are 16 bits, rather than the typical ASCII character set which
is 8 bits. (The 8-bit extension of ASCII is also called Latin-1, or ISO 8859-1, and is also a
standard.) As a larger character format, Unicode supports 65,536 characters,
providing Java international language capabilities. All characters manipulated
within the JVM are in Unicode, and Java provides methods for reading and
writing Unicode text files. Most ASCII manipulation is not affected by
Unicode, since the printing ASCII characters map directly to Unicode equivalents.
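As a small, hedged illustration of that last point, the following sketch writes a Unicode text file and reads it back with the standard reader and writer classes; the file name and the choice of UTF-16 are examples only:
import java.io.*;

public class UnicodeFileDemo {
    public static void main(String[] args) throws IOException {
        // The writer converts Java's 16-bit chars into UTF-16 bytes.
        Writer out = new OutputStreamWriter(new FileOutputStream("demo.txt"), "UTF-16");
        out.write("Accented text: \u00e9\u00e8\u00ea");
        out.close();

        // A reader configured for the same encoding restores the characters.
        BufferedReader in = new BufferedReader(
                new InputStreamReader(new FileInputStream("demo.txt"), "UTF-16"));
        System.out.println(in.readLine());
        in.close();
    }
}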
Finally in this section, we have to talk about Java Servlets. A topic
in itself, servlets are one of the current rages of Java programming today.
Servlets allow us to extend the functionality of our web servers, returning
any HTTP response given any HTTP request. Servlets are not required to
communicate via HTML, and extensive web applications can be constructed
using servlets as a mechanism to tie web clients into your internal Java
distributed object bus (if available).
To write a servlet, you typically override a method called doGet
or doPost, or both, depending on what your servlet intends to do.
For more general servlet requests, you can always override the default
service
method instead. Typically, the web server calls the service method,
which checks the type of the request, and either calls doGet or
doPost.
Most servlet methods take an HTTPRequest and HTTPResponse object as parameters,
which they use to communicate with the web server to handle the request.
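As a minimal sketch of the above (assuming the standard servlet API is on the classpath; the class name and response text are made up), a servlet that overrides doGet might look like this:
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class HelloServlet extends HttpServlet {
    // The web server's service method dispatches HTTP GET requests here.
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        out.println("<html><body>Hello from a servlet</body></html>");
    }
}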
When writing code for a class in package p1, you may need to reference
a class defined in package p2. By placing an import declaration
at the top of your source code file similar to:
import p2.*;
- or -
import p2.ClassName;
you can then refer to classes from p2 by their simple names rather than their fully qualified names.
In the event that an import statement results in an ambiguity to the
compiler, you will have to use the fully qualified class name to refer
to either class in question. For example, the following code is ambiguous
and will not compile as-is:
import org.omg.CORBA.*;
public class C {
    ...
    public Object f() {   // ambiguous: java.lang.Object or org.omg.CORBA.Object?
        ...
        return null;
    }
}
Consider the following declarations:
Object o1, o2;
The JVM implements a garbage collector as a low priority thread that
scans for unused objects that have been allocated-- that is, those objects
with no current references. When an object has no current references, the
garbage collector will attempt to return the memory allocated to the object
to the heap.
Of particular interest are the methods available in Java to compare references.
The equality operator is normally applied to primitive types as follows:
(x
== y) is true if and only if x and y have the same
value. For reference types, (x == y) is true if
and only if x and y refer to the same object on the heap. In pointer parlance,
you might say that x and y point to the same address in memory.
Many times we are more concerned whether the objects that x and y refer
to are equivalent (they are the same, member for member). In that case,
we use the equals method found in every Java object: (x.equals(y))
is true if the object y refers to is equivalent in value
to the object x refers to. Note that x.equals(y) does not imply
y.equals(x): the Java programmer can define in their equals
method what it means for another object to be equivalent. By default, the
equals
method inherited from Object simply compares references using the == operator.
Incidentally, it is strongly recommended that if two objects references
are ==, they should always be equals as well. Again, this
is not required, but don't think too long about it... it should always
hold true.
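A short, hypothetical illustration of the difference; the Point class below is invented and overrides equals to compare members:
public class Point {
    private final int x, y;

    public Point(int x, int y) { this.x = x; this.y = y; }

    // Two points are equivalent when their members match.
    public boolean equals(Object other) {
        if (!(other instanceof Point)) return false;
        Point p = (Point) other;
        return x == p.x && y == p.y;
    }

    // Keep hashCode consistent with equals.
    public int hashCode() { return 31 * x + y; }

    public static void main(String[] args) {
        Point a = new Point(1, 2);
        Point b = new Point(1, 2);
        System.out.println(a == b);      // false: two distinct objects on the heap
        System.out.println(a.equals(b)); // true: equivalent, member for member
    }
}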
Java classes also have visibility modifiers: a class may be either public
or not (as before, no modifier implies package-level visibility). For any
public
class, the Java compiler requires that its source code file have the same
name as the class. This implies that there can be only one public
class per Java source file. There is no limitation of this sort for non-public
classes.
Go ahead and cover the table above, then try this quiz. It refers to a class diagram (not reproduced here) showing classes A, A1, A2, B and C, with members of class A declared at each visibility level: private, package-level, protected and public.
a) What is the visibility of class A? ________________
b) Which of class A's members (private, package-level, protected, public) is visible to each of the following classes: A, A1, A2, B, C?
So if an object hides its data inside and provides only valid methods
for manipulating that data, how do we make sure the internal data start
out in a valid state? We write one or more constructors that tell
the object how to initialize!
A Java constructor always has the following attributes:
JTextField jtfLastName = new JTextField(<parameters>);
If you do not provide a constructor for a particular class, a default
constructor is provided that takes no arguments and simply calls the superclass constructor.
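A hedged sketch of the idea, using an invented Customer class whose constructor refuses to create an object in an invalid state:
public class Customer {
    private final String name;

    // The only way to build a Customer is through this constructor,
    // so the name field can never start out invalid.
    public Customer(String name) {
        if (name == null || name.length() == 0) {
            throw new IllegalArgumentException("name is required");
        }
        this.name = name;
    }

    public String getName() { return name; }
}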
A Java class may inherit from only one other class (this is called single
inheritance), and all classes in Java ultimately inherit from a common
general (base) class, java.lang.Object. By relating all classes
in Java to one common ancestor, the Object class provides application-wide
utility functions that are available to all Java objects. We already discussed
the equals method in our section on reference
semantics, but Object also contains methods like toString, clone,
and others.
When a class like HomePolicy inherits functionality from a Policy class,
it may be necessary to modify the inherited behavior. This modification
is facilitated by overriding the inherited functionality-- writing
a new implementation for the method, with exactly the same parameter list
and return type. Overriding a method enables polymorphism, which
is discussed in the next section.
There are times when you know how to override a method, but you have
more difficulty with the general implementation than the specific. For
example, the getPremium method of a HomePolicy might be expressed in terms
of house location, building materials, flood zones, dead-bolt locks, etc.
But if HomePolicy inherits the method from the Policy class, how would
you implement getPremium in the Policy class?
This problem suggests two options you might think of:
Any class that has at least one abstract method must
be declared abstract as well. Furthermore, the Java compiler
does not let you use the new operator on any abstract
class. This implies that to create an instance of a BoatPolicy, the developer
must
implement the getPremium method.
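A minimal sketch of that arrangement; the insured value and premium formula in HomePolicy are invented purely for illustration:
public abstract class Policy {
    // Every concrete policy must define how its premium is calculated.
    public abstract double getPremium();
}

class HomePolicy extends Policy {
    private double insuredValue = 250000;   // illustrative figure only

    public double getPremium() {
        // A made-up formula standing in for real underwriting rules.
        return insuredValue * 0.002;
    }
}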
While the term abstract implies that you have to create
a subclass before creating an instance, the term final
implies that a class may not be subclassed. The final
keyword is used throughout the JDK in places where Sun does not intend
you to create new subclasses that might override expected or required functionality.
For example, the String class is a final class: you cannot extend the class
to override any of its methods.
Just as a class can be marked final, individual methods
may also be marked final. In that case, the class can be
extended, but its final methods may not be overridden.
public double getPortfolioTotalCost(Policy[] portfolio) {
    double total = 0;
    for (int i = 0; i < portfolio.length; i++)
        total += portfolio[i].getPremium();
    return total;
}
Now in real-life, if you have a collection of insurance policies, you
might refer to them general as a portfolio. But if you thumb through each
one, totaling their premium, you expect that looking at each policy, you
will get its exact premium, and not some general default. Polymorphism
allows us to simulate the same effect in software.
You see every object reference in Java has two types: its perceived
type at compile-time, and its actual type at runtime. Our function would
not compile the call to getPremium if all policies did not have this method.
However, the actual behavior of the method call is not determined until
this code actually executes with some real portfolio of specific policy
types. By our own definition, we have written code that deals with a general
class (Policy) and used polymorphism to get specific behavior out of each
specific policy type.
Since the compiler prevents us from accessing specific methods and data
that are not defined at the general level, is there any way to access that
information after placing a specific object into a generic reference? In
Java, we can always test the runtime type of an object using the instanceof
operator. Then you can typecast to extract your specific information as
follows:
if (portfolio[i] instanceof HomePolicy) {
    HomePolicy homePolicy = (HomePolicy) portfolio[i];
    riskManager.assessRisk(homePolicy.getOutstandingLiens());
}
Java defines a type, similar to a class, called an interface. This interface
specifies a collection of function signatures, which are implicitly public
and abstract. (Optionally, any variables declared in an
interface are implicitly public, static
and final.) A class that implements the interface must
provide code for each of the functions it declares, or the class is also
abstract and cannot be instantiated.
Using interfaces allows you to define new types in Java, whose instances
do not have to be related through inheritance. In essence, any object that
implements a given interface is substitutable for any other object
that implements the same interface. This gives us the benefits of polymorphism,
without tying us into a rigid hierarchical inheritance structure.
The Java language allows a class to inherit its implementation from
only one other class, although a class can implement as many interfaces
as necessary. In this sense, Java supports single implementation
inheritance, but multiple interface inheritance.
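A small sketch of this: the Insurable interface and the two otherwise unrelated classes below are hypothetical, but any of them can be passed wherever an Insurable is expected:
public interface Insurable {
    // Implicitly public and abstract.
    double getReplacementCost();
}

class Car implements Insurable {
    public double getReplacementCost() { return 18000; }
}

class Painting implements Insurable {
    public double getReplacementCost() { return 75000; }
}

class Appraiser {
    // Polymorphism without a shared superclass: anything Insurable will do.
    double totalCost(Insurable[] items) {
        double total = 0;
        for (int i = 0; i < items.length; i++)
            total += items[i].getReplacementCost();
        return total;
    }
}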
class MyClass {
    // A static field belongs to the class itself and is shared by all instances.
    public static int i;
}
Similarly, a static function also belongs to its class,
and can be referenced from any instance or from the class itself. A function
declared static cannot reference the implicit this
variable or any instance variables, since it may have been invoked on the
class. These types of functions are useful to implement certain important
object oriented design patterns, to bootstrap an application, or to write
the Java equivalent of library (utility) functions.
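For example, here is a hedged sketch of a library-style utility class; the class and method names are invented:
public final class PremiumMath {
    // No instances needed; the class only exists to host utility functions.
    private PremiumMath() {}

    // A static function: no implicit 'this', callable as PremiumMath.annualize(...).
    public static double annualize(double monthlyPremium) {
        return monthlyPremium * 12;
    }
}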
Strings are immutable in Java, meaning that you cannot change their
value after creation. In the following example...
String s1 = "Hello";
String s2 = "World";
s1 += s2;
String s1 = "Hello";
String s2 = "World";
s1 += s2;
This is especially significant when you wish to compare two strings.
Keep in mind that String references are object references, not primitive
types. Using the == operator checks whether two references refer
to the same String object, while the equals method compares
whether two strings have equals values. Strings also provide several useful
utility method, including
equalsIgnoreCase, which compares the strings
in a case insensitive way.
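A small sketch of the three comparisons just mentioned; the literal values are arbitrary:
public class StringCompareDemo {
    public static void main(String[] args) {
        String a = "hello";
        String b = new String("hello");   // force a second object on the heap

        System.out.println(a == b);                       // false: different objects
        System.out.println(a.equals(b));                  // true: same character values
        System.out.println(a.equalsIgnoreCase("HELLO"));  // true: case-insensitive match
    }
}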
An inner class is simply a class defined within another class. The benefits
of inner classes may be elusive at first, especially when you consider
that some inner classes are not visible outside a localized scope. This
usually implies that inner classes are not as reusable, which leads many
developers away from using them.
Inner classes perform a vital role in solid Java development, however.
Inner classes preserve encapsulation by providing the class access to the
private variables of its containing class. There are some who feel that
this violates encapsulation, since another class has access to internal
state, but remember that the entire inner class is also part of the internals
of the object!
Your alternative using standard classes is to pass Object references
to these external objects, through which callbacks provide access to state,
either as non-private instance data, or through mutator functions that
must be made available not only to the external class, but also must be
available to the entire package. Inner classes let you keep your data,
and even many of your functions, private from the perspective of the outside
world.
Of the inner class types, you can define simple inner classes within
other classes. You can also define local classes, which are classes
defined in the scope of a particular method, and have the added advantage
of accessing local data and parameters declared final. Lastly, you can
define anonymous inner classes, which are local classes that are
declared and instantiated on-the-fly, without assigning a class name. These
classes are most often used for event handler adapters in GUI development.
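A typical use, sketched here with the AWT ActionListener interface; the button label and the counter are illustrative:
import java.awt.Button;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;

public class InnerClassDemo {
    private int clickCount;   // private state the inner class can still reach

    public Button makeButton() {
        Button button = new Button("Click me");
        // An anonymous inner class: declared and instantiated on the fly,
        // yet with full access to the enclosing object's private fields.
        button.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent e) {
                clickCount++;
                System.out.println("Clicked " + clickCount + " times");
            }
        });
        return button;
    }
}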
DataExpress is the database architecture for quickly building database
applications in JBuilder. As a thin layer over JDBC, DataExpress provides
facilities for filtering, sorting and searching through resultsets that
may be cached locally in memory, persisted on disk, or distributed as a
serializable Java object. A class hierarchy of the main DataExpress classes
is shown in the graphic to the right. The following key points address
questions asked in the Study Guide about DataExpress:
After successful completion of the exam, you will be Borland Product
Certified. You will receive an official plaque and be granted the use of
a logo as evidence of your qualifications to show to employers and customers.
Your certification positions you to meet increasing customer demands for
qualified service providers and sets you apart from other developers in
your field. Certification also verifies the specific knowledge and skill
you bring to a job, and the credential becomes increasingly important
as skill demands for IT professionals change rapidly.
There are also benefits to your organization, which can effectively
distinguish who has the skills needed for particular technical jobs and
more efficiently recruit, train, and deploy technical staff.
In addition to passing the Borland Product Certification exam for JBuilder,
it is the responsibility of prospective certified instructors to be sure
they meet the following prerequisites:
It is recommended, though not required, that an instructor attend the
applicable product Foundations class before attending a Train-the-Trainer
class. By taking the class at least once, it is hoped that prospective
instructors learn more about the scope and flow of the class, as well as
trouble areas and points of confusion for real students.
Finally, all certified instructors must sign a contractual agreement
with Borland which details the responsibilities of both parties. Their
employer must also complete an Authorized Education Center application
before open enrollment classes may begin.
JBuilder certification, especially as an instructor, is a privilege
and an honor, rewarding an individual time-and-again with esteem from peers
and pupils alike. The Borland certified community is a high-profile assembly
of developers and technology evangelists, a close network of business professionals
who interact to increase visibility, knowledge and opportunity for their
companies. Some travel around the world teaching and mentoring, others
have authored popular technical resources, and still more are executives
and founders of startup and mid-size training and consulting organizations.
To close, let me just add that I hope today is the day that you join
our ranks. Speaking for the certified development community, I believe
that I can safely say, we would love to add another to our ranks and to
take the opportunity to leverage your knowledge and experiences as our
community grows in support of Borland development tools and platforms.
(back to intro)
Paper originally presented at the 11th Annual Borland Conference, July 2000.
http://edn.embarcadero.com/article/26907
By: Lino Tadros
Abstract: This article will explain the preferred method of installing .NET components into the C#Builder IDE using the Open Tools API.
Because of a lack of documentation, we decided to open the assembly “Borland.Studio.ToolsAPI”
ourselves and snoop around.
We used Reflector, available from
As you can see the methods of the Interface seemed very interesting, especially
the “Add” method which takes a ToolBoxItem and a Category.
We also wanted to use the IOTASplashScreenService and IOTAAboutBoxService as
described in Erik
Berry's excellent article on the BDN.
If you read Erik’s article already you know by now that you need to implement
a static void function called “IDERegister” that will be called
by the C#Builder IDE upon starting if a correct path of your assembly is found
in the registry under KnownIDEAssemblies.
Great! So we decided to implement the IOTAComponentInstallService inside of
the IDERegister as well, besides the other 2 interfaces for the Splash Screen
and About Box.
This, by the way, proved to be the wrong route, because we needed the Splash
Screen and About box code to be called every time the IDE starts but that is not necessary
for installing the components onto the ToolBox.
So after talking to Allen Bauer and Corbin Dunn on the IDE architecture team
at Borland and getting some feedback from Ray Konopka, (King of Component Design),
Sean Winstead, ComponentScience’s Senior Architect, decided to create
2 design time packages. One for the SplashScreen and AboutBox that always gets
called from inside of the IDERegister, in the main assembly for the components,
and another design time assembly that runs only once after installation so that
the C#Builder IDE creates a category and installs all components from Elements(Ex)
into it.
It was the best way to accomplish the task knowing that we will need to integrate
into Visual Studio using the VSIP and also Delphi 8 when it becomes available.
It is cleaner and more maintainable that way.
Ok, now let’s tackle the challenges we had to overcome for installing
these components onto the ToolBox:
First, the “Add” method of the IOTAComponentInstallService had
a parameter of type ToolBoxItem which needed to be passed in for each component
we wanted to “Add” defining the ToolBoxItem object.
Well, this is where “reflection” comes in handy.
While writing components in .NET you will find a special attribute specifying
whether the component is supposed to be a ToolBoxItem component.
[ToolboxItem(true),
ToolboxBitmap(typeof(Bitmaps.Utilities), "CsLabel.bmp")]
public class CSLabel : ExBaseControl
{
}
The first attribute for the component above specifies that it is a ToolBoxItem
component.
So if we mark all our components with this attribute, we can easily load the
assembly when C#Builder starts and use reflection to request all components
with that attribute set to true and start creating an array holding all references
to these ToolBoxItems, like in the following code:
private ArrayList GenerateToolboxItems()
{
// Generate one item per component in each product assembly.
ArrayList items = new ArrayList();
// _assemblies is a member of the class containing all the names of the
// assemblies we need to check for components.
foreach (Assembly assembly in _assemblies) {
// Use reflection to get the controls and components in this assembly
// having a ToolboxItem attribute.
Type[] types = assembly.GetExportedTypes();
foreach (Type type in types)
{
object[] attribs = type.GetCustomAttributes(typeof(ToolboxItemAttribute), false);
if (attribs.Length > 0)
{
ToolboxItemAttribute toolboxItemAttrib = (ToolboxItemAttribute) attribs[0];
if (toolboxItemAttrib.ToolboxItemType != null)
{
ToolboxItem item = new ToolboxItem();
item.AssemblyName = type.Assembly.GetName();
item.DisplayName = type.Name;
item.TypeName = type.FullName;
object[] bitmapAttribs = type.GetCustomAttributes(typeof(ToolboxBitmapAttribute), false);
if (bitmapAttribs.Length == 1) {
ToolboxBitmapAttribute bitmapAttrib = (ToolboxBitmapAttribute) bitmapAttribs[0];
item.Bitmap = (Bitmap) bitmapAttrib.GetImage(null);
}
items.Add(item);
}
}
}
}
return items;
}
So as you can see from the previous code, the return of the GenerateToolBoxItems
function is an ArrayList of items that contain the assembly name, display name,
type name and bitmap of each component.
Now that we have that ArrayList, we are much closer to calling the Add method
on the IOTAComponentInstallService for each item in the array.
foreach (ToolboxItem item in GenerateToolBoxItems())
componentInstallService.Add(item, "Elements(Ex)");
Pretty cool! Huh?
Ok, so we did that! Still 2 problems:
First, how can we limit the execution of the code for the integration into the
toolbox to once instead of having to fire this code everytime the IDE starts?
What if the user changes the name of the category in C#Builder after the first
run, he or she will end up having 2 categories of the same components? Not cool!
Thanks to Corbin Dunn, I was told to make the value of the entry to the design
time assembly under HKEY_CURRENT_USER\Software\Borland\BDS\1.0\Known IDE Assemblies
equal to "RunOnce", then the IDE will delete the entry from the
registry after the first run. It worked perfectly! Thanks Corbin!
The final problem was what happened when we added the registry entry to C#Builder,
started C#Builder and found the Elements(Ex) category with all of our components
installed (it was a happy moment ?) but when we shut down the IDE and brought
it back up again, no more Elements(Ex) category or components ? (Not a happy moment)
This is when Corbin helped one more time to let us know that calling “SaveState”
on the IOTAComponentInstallService causes the IDE to stream all of its ToolBox
to an XML file under C:\Documents and Settings\<UserName>\Application
Data\Borland\BDS\1.0\ApplicationSettings.xml and the IDE calls "LoadState"
every time it loads to restore the ToolBox state from that file.
// Tell the component install service to save its state so that
// our changes are persistent.
componentInstallService.SaveState();
Hope this information will save you some time in the future when implementing a similar
integration routine.
Till next article,
http://edn.embarcadero.com/article/30303
Don't be a borg; harvest ideas and let your creativity shine.
It is now official, from a post by the great Scott Guthrie, that jQuery is bundled with the ASP.NET MVC Beta.
To mark the marriage with jQuery, I have released a new version of FlickrXplorer that uses nothing but jQuery on the client. More info on the release can be found at the following URL.
Ajax.Form gives a nice way of adding Ajax features with no more tears. Just add a using block with the necessary HTML controls and a submit button, and everything else is taken care of on your behalf. Inspired by it, in the FlickrXplorer project I have created an Html.JForm that works in a similar way using the jQuery library.
To see how it works, note that Ajax.Form basically creates an HTML form with an onsubmit hook where it injects a few JavaScript calls from the Microsoft MVC Ajax library. To replicate that, let's say we want to do image list paging with Html.JForm that gets the data, shows the loader and updates the container.
I first added the MVC JavaScript file (e.g. inside default.master) with the Ajax stuff that works with Html.JForm and referenced the jQuery library.
<script type="text/javascript" src="<%= Page.ResolveClientUrl("~/Content/jquery-1.2.6.min.js") %>" ></script>
<script src="<%= Page.ResolveClientUrl("~/Content/mvc-jquery.js") %>" type="text/javascript"></script>
Finally, In the actual ascx/aspx file, I wrote the following
<%
using (Html.JForm(VirtualPathUtility.AppendTrailingSlash(HttpContext.Current.Request.Path), "POST", new JOptions
{
TargetPanelId = "imgListContainer",
WaitPanelId = "imgListWait"
}))
{ %>
...
...
<%
} %>
That's it. Also, note that this will work with controller actions that return either ContentResult or ActionResult.
Basically, the signature of Html.JForm looks like
Overload 1 : Html.JForm(actionurl, methodType, JOptions);
Html.JForm("/controller/action", "GET/POST",
new JOptions
{
TargetPanelId = "update container",
WaitPanelId = "intermediate visible panel"
});
Overload 2 : Html.JForm(actionurl, methodType, JOptions, htmlAttributes);
Html.JForm("/controller/action", "GET/POST",
new JOptions
{
TargetPanelId = "update container",
WaitPanelId = "intermediate visible panel"
}, new { name = "myForm" });
Going deeper, these are actually HtmlHelper extension methods
public static IDisposable JForm(this HtmlHelper helper, string action, string method, JOptions options, object htmlAttribtues)
{
return new JQueryForm(helper, action, method, options, htmlAttribtues);
}
Behind the scenes they call an IDisposable class called JQueryForm that generates the form with the actual hookup scripts. The concept is to generate the starting form tag with all the attributes provided during initialization, and the ending form tag on the Dispose call. This is basically how it is done in Ajax.Form, which can be seen with a little help from reflector.net (or the source from CodePlex). Now, inside System.Web.Mvc there is a new public class called TagBuilder, which I found really handy for building up HTML tags.
Therefore, here is what I have done during initialization of JQueryForm. I have included only the code that generates the tag; the rest you can find yourself in the code provided at the end.
TagBuilder builder = new TagBuilder("form");
builder.MergeAttribute("action", url);
builder.MergeAttribute("method", method);
builder.MergeAttributes<string, object>(new RouteValueDictionary(htmlAttributes));
if (options.CallBack == null)
{
builder.MergeAttribute("onsubmit",
string.Format(jStringOverload, options.WaitPanelId, options.TargetPanelId));
}
else
{
builder.MergeAttribute("onsubmit",
string.Format(jString, options.WaitPanelId, options.TargetPanelId, options.CallBack));
}
responseBase = helper.ViewContext.HttpContext.Response;
responseBase.Write(builder.ToString(TagRenderMode.StartTag));
To add attributes, MergeAttribute is used; it has a few overloads and appends the attribute to the tag being generated. Finally, to build the string you need to use the ToString overload with the proper render mode. I have used TagRenderMode.StartTag, which will generate the opening HTML form tag. Basically, JQueryForm has only a constructor, where I build the starting tag, and a Dispose method, where I just have to close the tag.
responseBase.Write("</form>");
As you can see, I have hooked one method into the onsubmit call; the purpose of this method is to prevent the default form post, do an Ajax callback and put the result into an HTML container. I named it jAjaxSubmit.
Before jumping to the analysis of the method, let's see how to do ajax calls using jQuery. It is very clean and simple. Therefore, really cool.
$("#" + waitElementId).show();
$.ajax({
type: actionType,
dataType: "html",
url: url,
data: params,
success: function(result) {
$("#" + elementId).html(result);
$("#" + waitElementId).hide();
if (typeof callback != 'undefined')
callback();
},
error: function(error) {
$("#" + waitElementId).hide();
//TODO:// write your log here
}
});
This is an example of an Ajax callback where I can provide the type [GET|POST], the dataType [html|xml|json] (I have used "html"), the data [serialized form params], and success and failure callbacks. For those who are new to jQuery, $(..) is the equivalent of $get in Microsoft Ajax; it accepts an element either by id or by name (# is used to specify get-element-by-id).
During the submit button click under Html.JForm, we first need to stop the regular form post. For Internet Explorer we can do this by returning false, but for Mozilla-based browsers the proper way is to call stopPropagation().
if (!$.browser.msie) {
e.stopPropagation();
}
Inside the using block of Html.JForm, we specify HTML elements either with Html extension methods (<%= Html.Textbox("comment") %>) or by hand. We need to get their values and pass them in "&"-separated form during the submit process. Using jQuery, we can easily do that with $(form).serialize() and pass the result in the callback. So the final script looks like
function jAjaxSubmit(form, e, waitPanelId, targetToUpdate, methodName) {
if (!$.browser.msie) {
e.stopPropagation();
}
var isValid = true;
if (typeof methodName != 'undefined') {
isValid = methodName(form, null);
}
if (isValid) {
// create the form body
var body = $(form).serialize();
renderContent2(targetToUpdate, waitPanelId, form.action, body, form.method);
}
return false;
}
renderContent2 is just a wrapper around the callback script shown earlier. In the running project, you can inspect the injected block with Firebug.
Apart from the internals, everything is done automatically by the Html.JForm call. This can be found running in the FlickrXplorer project, but for your convenience I have added a sample project using the default MVC template which you can get here.
Of course, you can try browsing the live app at (this gives you a nice way to quickly explore millions of cool public photos from Flickr).
Have Fun!!!
Updated with Asp.net MVC Beta on Oct 19, 2008
Nice post...
In a post a few months back, I showed how I can simulate a callback using jQuery and ASP.NET with my experimental
http://weblogs.asp.net/mehfuzh/archive/2008/10/13/using-jquery-to-do-ajax-form-posts-in-asp-net-mvc.aspx
Today, I was out Googling for a good ATOM library to add to a project I am working on so I can support both RSS & ATOM. Thus far, I have been using the excellent ASP.NET RSS Toolkit, but I was disappointed to find very few .NET solutions for ATOM available in the open-source community.
The ATOM.NET (& RSS.NET) libraries appear to nolonger be actively supported/developed. As a result, ATOM.NET still only supports ATOM v0.3. This pretty much renders it useless for general use since Blogger and many other services have shifted everything to ATOM v1.0.
I found another interesting library, Atomizer, on GotDotNet but it contained a lot of unneccessary tangential code and inexplicably adhered to the "put-all-twenty-classes-in-one-file" methodology which made sifting and filtering the chaffe a bit more annoying.
Seeing as how GotDotNet is being eliminated I thought it wise to forego projects on there anyhow, so I decided to check out its chosen successor; CodePlex. I did a quick search for "ATOM" which came up with 2 likely hits; Feed Library, and WebFeedFactory.
Feed Library said all the right.
Ufortunately that's all there is too that project - just text. There is no source code & no releases! Oh well, I'll bookmark that one and maybe check back in a few months to see where it goes.
The WebFeedFactory actually had source-code AND text:
The goal of this project is to create a reusable library for parsing both Atom and RSS feeds and provide a common interface for working with either type of feed.
I loved what it said, and the code looked promising, even though it too had not been updated in a while and lacked ATOM 1.0 support. However, the kiss of death was when I saw that it was GPL'd. Game over, bye bye now.
So, my journey continued...
I began pulling down numerous libraries packaged as articleware, blog samplings, and even a few commercial products. Most were the usual glut of StrongTypedXmlDom implementations, or suffered from XmlSerialization disorder. Finally, I discovered Brian Kuhn's Argotic Web Content Syndication library.
Argotic appears full featured, thoughtfully composed, extensible, permissively licensed...and, most importantly, free! It even supports almost every known RSS extension including iTunes. The only downside I can see so far is that it is apparently "closed-source" since only binaries are available on the website. I think I'll go ahead and play with his API a bit before deciding whether or not it's worthwhile to pursue the elusive source code.
I'm not sure if it's what I'm looking for, but it looks damn close.
While I continue to search for ATOMtopia, please leave a comment if you have a suggestion for a good, simple, and preferably free (opensource) library - or at least tell me what you use.
Just a quick update...
My first test of Argotic failed when trying to read Atom feeds from blogger.com. However, ATOM.NET worked just fine.
Also, based upon the exception I received, it does look like the Argotic component is using XmlSerialization, which as we all know is not very forgiving, especially when alien XML namespaces are appended, such as with the blogger.com extensions to Atom.
Maybe the old ATOM.NET library isn't a bad option after all...
The RSS library from Racoom.net is pretty nice - it's only RSS and OPML though ...
Many of the libraries forget about delta encoding with ETag.
You can check generated rss on my site
The Orcas release of .Net Framework will have nice Rss and Atom support, the first CTP is out, see
as a starting point.
Once I get a few more bugs worked out, I plan on providing the source code for Argotic (I didn't think people really would want the source, I tend to just want an assembly to reference that gets the job done).
CodeSniper: I guess I need to look at blogger.com ATOM feeds and get a fix for Argotic. Feel free to post on the bug reporting forum on this issue (specifc feed URL's always help). I have noticed that some people report issues that end being ATOM 0.3 feeds instead of ATOM 1.0 feeds, but I will get this fixed if there is an issue.
preishuber: Argotic fully supports conditional get of feeds using eTag/LastModified headers. Check out the Refresh() method on ATOM/RSS feed classes and/or see the online examples of conditional get.
Hadn't heard about the Orcas syndication feed support, will have to check it out.
Lance,
The next release of Argotic will have fail-over support to parse a feed via a reader if the XML serialization fails. Thanks for the feedback.
Brian: Very cool of you to open source! I love Codeplex and all the good stuff you can find there.
Lance: In some scenarios you probably could use Yahoo Pipes to prepare stuff for you as well.
A further update to the RSS.NET and ATOM.NET libraries... They are very much alive and under extensive redevelopment. We have purchased these products and a complete update with many, many new features are included. The final libraries have not been released yet (within 2 weeks), but interested developers can obtain a beta by contacting us via the website.
Dale,
So, they are going commercial?
I would hate to see the open-source community lose yet another tool to commercialization. I understand the need for monetization, but it often feels like a bait-and-switch when I adopt, contribute to, and support an open-source initiative only to have it go closed-source or commercial once it matures.
Yes, it is a fine line between open source code and commercialization - but it is a situation of losing an outdated tool while gaining an updated product. The RSS and ATOM libraries have been purchased and used as the base code for a complete redevelopment. There will still be basic, "free" versions for the individual developer but an "enhanced" version for those willing to purchase additional features. We are great supporters of open-source, but at some point there has to be a financial justification for the team of developers that have been working on these projects for over a year.
Will you be removing the versions of RSS.NET and ATOM.NET that are currently available on SourceForge.com or change their license in any way?
I don't have a problem with someone taking a base set of code, rewriting it, and using it as the basis for a new product. If that didn't happen, many open-source projects would not be sustainable.
However, what I do have a problem with is when you also remove, close, or reduce the licensing on the original open-source codebase that made that very product possible.
Even though the old source is no longer as relevant as it once was, it is still useful to those who have existing implementations.
A new CTP that runs on .Net Framework 3.0 is out with RSS & Atom support.
See these links for starters:
I'm not sure if this is the right place to post this but heck, here it is. :) I'm trying to use Argotic and I'm having a problem adding a reference to its DLLs in Visual Studio.
Here's the error message: "A reference to 'C:\Inetpub\wwwroot\test\bin\Argotic.Core.dll' could not be added. This is not a valid assembly or COM component. Only assemblies with extension 'dll' and COM components can be referenced. Please make sure that the file is accessible, and that it is a valid assembly or COM component."
I first thought it was just a permissions problem. I already tried allowing different accounts and even Everyone, but to no avail.
By the way, I'm using Framework 1.1 if that helps.
Argotic is a .NET 2.0 library, which could explain that error.
We just released version 2.0 of the ASP.NET RSS Toolkit and have added Atom, RDF and OPML support. Also, it now supports any extension that is properly namespaced... enjoy.
http://weblogs.asp.net/lhunt/archive/2007/03/29/rss-and-atom-libraries.aspx
The original draft of my latest security article for MSDN Magazine, App Lockdown, included sections covering web security. They were ultimately removed as they overlapped slightly with other articles already featured by MSDN Magazine. They are however part of the story I wanted to tell and so I include those sections here.
If you enjoyed my article, I encourage you to read these sections on web security to complete the story.
Protecting Web Clients
Classic web technologies have made it very easy to build insecure web-based systems. Writing secure web services and clients takes some work because the web was founded on the idea of openness rather than privacy and security. The original web authentication protocols, such as Basic authentication, are primitive compared to the network authentication protocols used by Windows. In this section I will focus on the security model of web clients and what you need to do to secure your web service clients.
Unlike the previous types of clients we have talked about, web clients really do not have much say over the quality of authentication and privacy. The client needs to be satisfied with the authentication scheme provided by the server as well as the level of privacy offered for protecting data. With native Windows authentication the client is typically in charge of negotiating a satisfactory form of authentication, data integrity and privacy.
Here is an example of a simple anonymous web request using the .NET Framework:
Uri uri = new Uri("");HttpWebRequest request = (HttpWebRequest) WebRequest.Create(uri); using (WebResponse response = request.GetResponse()){ // TODO: read response}
Assuming the server supports anonymous access, the request should succeed. The simplest way to provide a set of credentials to authenticate the client is to create a NetworkCredential object and assign it to HttpWebRequest’s Credentials property. Using this approach, however, is not very safe since you do not know how those credentials will be used. If the server requested Basic authentication then the credentials will be sent over the wire which is clearly not safe. A better approach is to use the CredentialCache class to have a say in negotiating the authentication protocol. Here is a more interesting example:
Uri uri = new Uri("");HttpWebRequest request = (HttpWebRequest) WebRequest.Create(uri); NetworkCredential credentials = new NetworkCredential("principal@authority", "password"); CredentialCache cache = new CredentialCache(); cache.Add(new Uri(""), "Basic", credentials); cache.Add(new Uri(""), "Negotiate", credentials); request.Credentials = cache; using (WebResponse response = request.GetResponse()){ // TODO: read response}
In this example the web request will use Basic authentication only if SSL is also used, otherwise it will use Windows authentication. The credentials and authentication scheme to use are determined by finding the closest match to the URI prefix in the cache. If the actual URI begins with the https scheme it indicates that SSL will be used. If you simply want to use the client's current security context then set the HttpWebRequest's Credentials property to CredentialCache.DefaultCredentials. Assuming the server supports Windows authentication, a network logon session will be created for you on the server. Keep in mind that Basic authentication can be risky even if you employ SSL. If the server is somehow compromised, a bad guy can easily get the client's cleartext credentials and do unspeakable things masquerading as the client. Windows authentication provides a better solution by proving the client's identity without ever sending the credentials over the wire.
Despite the benefits of Windows authentication over HTTP, it is not a great way to make friends on the Internet. Few web clients support Windows authentication and few web servers allow it. Even if they do, you still have to deal will the multitude of firewalls that are specifically designed to block anything but simple HTTP requests over well-known ports. One of the benefits of using SSL is that it can provide client authentication. Although not practical for large-scale web applications like amazon.com where SSL is only used for server authentication and privacy, SSL client authentication provides a very portable and secure form of authentication for web clients. Authentication is achieved through the use of client certificates. In a typical SSL handshake, the server proves its identity by presenting the client with its certificate. To authenticate the client with a client certificate, the client simply needs to provide the server with its certificate. This is clearly an oversimplification, but you get the point.
To use a certificate, the client needs to be issued a certificate by a certificate authority. Since the certificate has an associated private key, it needs to be kept safe. This is handled by a certificate store which you can access using the Cryptography API or through Internet Explorer. Here is an example of using a client certificate:
Uri uri = new Uri("");X509Certificate certificate = X509Certificate.CreateFromCertFile(@"C:\client.cer"); HttpWebRequest request = (HttpWebRequest) WebRequest.Create(uri);request.ClientCertificates.Add(certificate); using (WebResponse response = request.GetResponse()){ // TODO: read response}
As you can see, using SSL and client certificates for authentication is quite simple as long as you have the infrastructure set up to support it. Due to the hostile nature of the Internet, it is very likely that a bad guy will attempt to redirect the client to a different server under his control. Therefore proving the identity of the server is important. Server authentication is also managed by a certificate when using SSL. The server presents the client with a certificate that the client can then use to validate the server's identity. Of course for this to make any sense, a central authority is required. In Windows networking this is achieved by using Kerberos and a Windows domain controller. The web equivalent is through the use of certificate authorities and a chain of trust. The client needs to trust a certificate authority that directly or indirectly issued the web server with its certificate. In this way the client can validate the integrity of the server's certificate. This is largely an administrative task to configure servers and clients with mutually acceptable certificate authorities. But sometimes all you want is to use SSL for either client authentication or privacy and you do not care about server authentication. This may be acceptable for connecting to a web server on a trusted network or simply in development and testing of your application to avoid the administrative overhead of certificate management. The challenge is that the default behavior for web clients is to validate the server certificate. To provide custom certificate validation a web client can provide an implementation of the ICertificatePolicy interface and assign it to the static ServicePointManager.CertificatePolicy property. Future web requests that use SSL will then call the ICertificatePolicy.CheckValidationResult method to determine whether or not the certificate should be honored. Here is a simple example that will accept any server certificate:
class CustomCertificatePolicy : ICertificatePolicy
{
    public bool CheckValidationResult(ServicePoint servicePoint,
                                      X509Certificate certificate,
                                      WebRequest request,
                                      int errorCode)
    {
        return true;
    }
}
For more control over validation you can query the certificate as well as check the error code to determine the reason for validation failure. The error codes that you can expect are listed in the documentation for the CERT_CHAIN_POLICY_STATUS structure in the Platform SDK.
So far we have only discussed web clients in the context of simple HTTP requests. Web programming has come a long way since the days of simple HTTP GET request and response messages. As a web client developer you need to understand what is involved in securing your clients in the new programmable web. The following is a classic web service client proxy using the .NET Framework:
[WebServiceBinding(Name="SampleServiceSoap", Namespace=SampleService.WebServiceNamespace)]
class SampleService : SoapHttpClientProtocol
{
    public const string WebServiceNamespace = "";

    [SoapDocumentMethod(WebServiceNamespace + "EchoUserName")]
    public string EchoUserName()
    {
        object[] results = Invoke("EchoUserName", new object[0]);
        return (string) results[0];
    }
}
To connect to a Web service using this proxy class, simply create an instance of the SampleService class, set the Url property to the address of the web service endpoint and call the EchoUserName method. This method will block until the SOAP response message is received and return the string contained in the message. Authentication works in the same way as I described for simple HTTP requests. You can use a CredentialCache object to provide a set of credentials to use and set the Credentials property of the SampleService class. Internally the credentials will be passed to the underlying HttpWebRequest object that will actually make the SOAP request over HTTP. That is Web service client programming with the .NET Framework in a nutshell. To take advantage of some of the more modern web security standards you need to turn to Web Services Enhancements 2.0 (WSE) which is an extension to the .NET Framework that provides an implementation of WS-Security. To start using WSE in your client application you need to add a reference to Microsoft.Web.Services2 to your assembly. To take advantage of it for the SampleService class described above, simply change the base class from SoapHttpClientProtocol to the WebServicesClientProtocol class from the Microsoft.Web.Services2 namespace. That’s it! You’ve just written a WSE client. It may not look like anything has changed and indeed the SOAP envelope body will be exactly the same, but if you happen to have a SOAP trace going, you will have noticed that there is now a SOAP envelope header with all kinds of interesting information. If the Web service you are connecting to knows nothing about WSE, it will just be ignored unless the headers are attributed with mustUnderstand=”1”.
The client can provide credentials in the form of security tokens in the Security SOAP header. Here is an example using a WS-Security UsernameToken:
SampleService service = new SampleService();
service.Url = "";

UsernameToken token = new UsernameToken("principal@authority", "password", PasswordOption.SendPlainText);

Security securityHeader = service.RequestSoapContext.Security;
securityHeader.Tokens.Add(token);

string userName = service.EchoUserName();
Other security token types are available to support certificate authentication as well as Windows authentication using Kerberos security tokens. The UsernameToken is similar to Basic authentication in HTTP. The credentials are sent across the wire in the clear and it even results in the same type of logon session on the server namely a network logon session with network credentials. While experimenting or developing with Web services I often find it useful to use an HTTP trace tool such as the MSSoapT tool from the Microsoft SOAP Toolkit to capture the request and response messages. To do this you need to direct your client code to a different port that the trace tool is listening on. The trace tool will forward the request on to the final destination after capturing the SOAP envelope. This can be a problem for WSE since it has addressing and security capabilities that validate the destination address to ensure that it matches up to what was expected. Of course since WSE supports addressing, it is possible to indicate what the final destination of the message is so that routing is possible. To enable this, simply create a Uri object with the actual address of the Web service and set the Destination property of the AddressingHeaders object for the request:
UriBuilder uri = new UriBuilder("");
service.Url = uri.ToString();
uri.Port = 80;

AddressingHeaders addressingHeaders = service.RequestSoapContext.Addressing;
addressingHeaders.Destination = new EndpointReference(uri.Uri);
Web Services Enhancements 2.0 provides a wealth of powerful functionality to provide fine grained control over the security aspects of your web service programming from authentication to integrity and privacy. For in-depth information on WSE, visit the Web Services Developer Center on MSDN.
Defending Web Servers
There are a number of different options for authenticating web clients. Internet Information Services (IIS) makes it relatively easy to use Windows user accounts to authenticate web clients. Certificates can also be used to authenticate clients when SSL is in use and this is just what IIS provides. Applications can elect to enable anonymous access in IIS and implement their own authentication scheme. One great example of this is ASP.NET Forms authentication. Of course this is only suitable for a web application that doubles as the presentation layer. For Web services you can employ WS-Security to provide more fine-grained control over security capabilities.
To begin to understand how web server security works, it helps to have an understanding of the different security contexts present at any given time. In Protecting COM Clients I mentioned the notion of an effective token or security context. The effective token is the thread token if one exists, otherwise it is the process token. The .NET Framework exposes the concept of an effective token through the GetCurrent static method of the WindowsIdentity class. The resulting WindowsIdentity object wraps the effective token and provides an elegant interface for querying it. Calling the WindowsIdentity.GetCurrent method from within a web method can tell you a lot about how IIS and ASP.NET manage security contexts. Consider the following simple web method:
[WebMethod]
public string EchoUserName()
{
    WindowsIdentity identity = WindowsIdentity.GetCurrent();
    return identity.Name;
}
What will this return? Well it depends on many things. Let us consider a few options. If the impersonate attribute (system.web/identity/@impersonate) in the web application's web.config file is set to false it will return the name of the process identity, which defaults to the Network Service account. If the impersonate attribute is set to true and the web client is connecting anonymously it will return the name of the account representing anonymous clients, which is typically SERVER\IUSR_SERVER where SERVER is the host name of the web server. If the web application does not accept anonymous connections it will return the name of the account represented by whatever logon session was established for the client. Getting a feel for how ASP.NET, IIS and Windows combine to provide security for your web applications will take you a long way in understanding web server security in general.
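For reference, the attribute in question lives in web.config and looks like this (a minimal fragment):

<configuration>
  <system.web>
    <identity impersonate="true" />
  </system.web>
</configuration>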
Using Windows user accounts for managing authentication and authorization is extremely convenient and can save a lot of work in development and maintenance. But using Windows user accounts does not always make sense. There are two parts to the problem. The first is that there is a need for a more universally acceptable form of identification. A popular answer to this is X.509 certificates. The second problem is that the mechanics of authentication at the web server and HTTP levels are just not flexible enough to meet the needs of Web service-based applications.
Due to the universal support for SSL in web clients and servers, using SSL to provide data integrity, privacy and authentication is quite simple. The biggest obstacle is issuing certificates to servers and clients that can then be trusted by all through some direct or indirect certificate authority. If the client sent a certificate along with its request, you can retrieve information about the certificate using the HttpClientCertificate object returned by the HttpRequest.ClientCertificate property.
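A sketch of what that looks like in an ASP.NET page or web method (property names are those of HttpClientCertificate):

HttpClientCertificate certificate = Request.ClientCertificate;

if (certificate.IsPresent && certificate.IsValid)
{
    // e.g. use the certificate subject to identify the caller
    string subject = certificate.Subject;
}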
In Protecting Web Clients, I briefly discussed how you can use Web Services Enhancements 2.0 (WSE) to provide WS-Security capabilities to your web clients. WSE also provides an ASP.NET SOAP extension to provide support for processing WS-Security headers. Once the SOAP extension is added to your web application’s web.config file, you can interrogate the Security SOAP header for a request using the Security object returned by the Security property of the RequestSoapContext object. A typical usage would be to enumerate the security tokens in the Security header. Among others there are tokens available for Kerberos tickets as well as X.509 certificates so integrating with your existing Kerberos domain or public key infrastructure (PKI) should be straightforward.
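As an illustration (a sketch, not taken from the article), enumerating the tokens on the server side looks roughly like this:

SoapContext context = RequestSoapContext.Current;

foreach (SecurityToken token in context.Security.Tokens)
{
    UsernameToken userToken = token as UsernameToken;

    if (userToken != null)
    {
        // the caller authenticated with a WS-Security UsernameToken
        string userName = userToken.Username;
    }
}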
WSE also provides support for signing and sealing all or parts of the SOAP envelope. For in-depth information on WSE as well as WS-Security, visit the Web Services Developer Center on MSDN.
© 2004 Kenny Kerr
|
http://weblogs.asp.net/kennykerr/archive/2004/09/25/234250.aspx
|
crawl-002
|
en
|
refinedweb
|
11-25-2015 08:32 AM
I would like to implement my own editing function for GMSC using a Java plugin. I want to see this function in the editing pane with the default functions like move, split line, etc.
Which classes/methods should I implement/use? I have already tried to create an action class like this:
public class MyEditAction extends EditAction
implements Runnable, MeasurementListener
{...}
then a plugin method with an Action annotation like this:
@Action(
conditions=@Condition(actionCondition=EditActionConditionFactory.class,parameters={@Parameter(name="minimumSelection",defaultValue="1",description=""),@Parameter(name="geometryTypes",defaultValue="EditPolyline",description="")}))
public void GE_MYEDITFN(com.intergraph.tools.utils.disptach.RPAction a)
{
EditAction action = new MyEditAction(this.editPlugin);
action.run();
}
and I tried to programmatically start editing with EditSettings: settings.setActions(ActionConfiguration.parseMapActions("GE_MYEDITFN"));
Editing starts but without my editing function - I got 16:51:24 WARNING: GE_INTERPOLATELINE isn't a known action!
I have two questions:
1) Is this the correct way to implement my own editing function and, if it is, what else should I do to make it work?
2) If the answer to 1) is NO, where should I start (which classes/methods should I implement) to create custom editing function? (Just need to know where to start)
Thanks
Tereza
11-27-2015 01:54 AM
Hi Tereza,
I never tried to add a custom EditAction by using a GMSC plugin but from my current point of view it should work.
Your code itself looks good, but you missed one special thing. GMSC has a so-called ActionDispatcher that is able to call methods which are marked by the com.intergraph.tools.utils.disptach.annotations.Action annotation.
The ActionDispatcher is powerful but this class isn't able to find @Action marked methods without your help, because you have to tell the ActionDispatcher which classes contain such marked methods.
The GMSC framework contains two AbstractActionDispatcher implementations:
The EditPlugin itself uses a MiniDispatcher and therefore you have to do the following:
The following code should do that for you.
ApplicationContext.getPluginCurator().getByType(EditPlugin.class).ifPresent(p -> {
    p.getDataModelHandler().getMiniDispatcher().registerAllActions(MyEditAction.class);
});
Best Regards,
Steve
11-30-2015 03:31 AM
Hi Steve,
thanks for the answer. I tried to apply the proposed solution, but it gave me the error:
30.11.2015 12:29:10 SEVERE: Can't start edit! -->
java.lang.IllegalStateException: state is NOTINITIALIZED but a state > PREPARING was expected!
at com.intergraph.web.plugin.edit.EditPlugin.getDataModelHandler(EditPlugin.java:276)
The problem is that the action has to be registered BEFORE start of editing, but EditDataModelHandler and MiniDispatcher are initialized AFTER editing started. I guess I need to register the action somewhere earlier.
Regards,
Tereza
12-04-2015 12:15 AM
Hi Tereza,
We checked the problem and unfortunately that was the only chance we had to do something like that.
Another solution would be to write your own Action which is visible in your tab, map-context or favorites.
Best regards,
Stefano
12-04-2015 01:08 AM
Hi Stefano,
thanks for the answer, I will try a different solution.
Regards
Tereza
|
https://community.hexagongeospatial.com/t5/Developer-Discussions/Java-plugin-custom-editing-function/m-p/1873/highlight/true
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
It's easy to use the ADS1115 and ADS1015 ADC with CircuitPython and the Adafruit CircuitPython ADS1x15 module. This module allows you to easily write Python code that reads the analog input values.
You can use this ADC with any CircuitPython microcontroller board or with a computer that has GPIO and Python thanks to Adafruit_Blinka, our CircuitPython-for-Python compatibility library.
First wire up the ADC to your board exactly as shown on the previous pages for Arduino using an I2C interface. Here's an example of wiring a Feather M0 to the ADS1115 with I2C:
Since there's dozens of Linux computers/boards you can use we will show wiring for Raspberry Pi. For other platforms, please visit the guide for CircuitPython on Linux to see whether your platform is supported.
Here's the Raspberry Pi wired to the ADS1015 with I2C:
Next you'll need to install the Adafruit CircuitPython ADS1x15 library on your board. This library depends on:
- adafruit_bus_device
You can also download the adafruit_ads1x15 folder from its releases page on Github.
Before continuing make sure your board's lib folder or root filesystem has the adafruit_ads1x15 and adafruit_bus_device files and folders copied over.
Next connect to the board's serial REPL so you are at the CircuitPython >>> prompt.
If your default Python is version 3 you may need to run 'pip' instead. Just make sure you aren't trying to use CircuitPython on Python 2.x, it isn't supported!
To demonstrate the usage of the ADC we will initialize it and read the ADC channel values interactively using the REPL. First run the following code to import the necessary modules and initialize the I2C bus:
import board
import busio

i2c = busio.I2C(board.SCL, board.SDA)
Next, import the module for the board you are using. For the ADS1015, use:
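# standard module name in the adafruit_ads1x15 package
import adafruit_ads1x15.ads1015 as ADS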
OR, for the ADS1115, use:
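import adafruit_ads1x15.ads1115 as ADS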
Note that we are renaming each import to ADS for convenience.
The final import needed is for the ADS1x15 library's version of AnalogIn:
from adafruit_ads1x15.analog_in import AnalogIn
which provides behavior similar to the core AnalogIn library, but is specific to the ADS1x15 ADC's.
OK, now we can actually create the ADC object. For the ADS1015, use:
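ads = ADS.ADS1015(i2c)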
OR, for the ADS1115, use:
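ads = ADS.ADS1115(i2c)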
Now let's see how to get values from the board. You can use these boards in either single ended or differential mode. The usage for the two modes are slightly different, so we'll go over them separately.
For single ended mode we use AnalogIn to create the analog input channel, providing the ADC object and the pin to which the signal is attached. Here, we use pin 0:
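chan = AnalogIn(ads, ADS.P0)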
To set up additional channels, use the same syntax but provide a different pin.
Now you can read the raw value and voltage of the channel using either the value or voltage property.
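print(chan.value, chan.voltage)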
For differential mode, you provide two pins when setting up the ADC channel. The reading will be the difference between the two. Here, we use pin 0 and 1:
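chan = AnalogIn(ads, ADS.P0, ADS.P1)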
You can create more channels by doing this again with different pins. However, note that not all pin combinations are possible. See the datasheets for details.
Once the channel is created, getting the readings is the same as before:
Both the ADS1015 and the ADS1115 have a Programmable Gain (PGA) that you can set to amplify the incoming signal before it reaches the ADC. The available settings and associated Full Scale (FS) voltage range are shown in Table 3 of the datasheet.
You set the gain to one of the values using the gain property, like this:
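ads.gain = 16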
Note that setting gain will affect the raw ADC value but not the voltage (except for variance due to noise). For example:
>>> ads.gain
1
>>> chan.value, chan.voltage
(84, 0.168082)
>>> ads.gain = 16
>>> ads.gain
16
>>> chan.value, chan.voltage
(1335, 0.167081)
>>>

The value changed from 84 to 1335, which is pretty close to 84 x 16 = 1344. However, the voltage returned in both cases is still the actual input voltage of ~0.168 V.
The above examples cover the basic setup and usage using default settings. For more details, see the documentation.
|
https://learn.adafruit.com/adafruit-4-channel-adc-breakouts/python-circuitpython
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
From: Aleksey Gurtovoy (agurtovoy_at_[hidden])
Date: 2008-06-20 00:51:02
Hi Martin,
> I stumbled across something very strange related to boost::mpl::for_each and numbered vectors and sets (as opposed to the variadic
> forms). The program below contains a typelist with 21 (i.e. more than BOOST_MPL_LIMIT_SET_SIZE) classes. This typelist is
> converted to a typelist containing pointers to those classes. Finally the main program calls for_each with that typelist to print
> the names of the types. This works without problems on Linux and Windows. But now:
> If you remove the include "boost/mpl/vector/vector30.hpp" everything still compiles fine without warnings under both operating
> systems. On Linux everything continues to work, whereas under Windows nothing is printed anymore.
It's a bug in the library's diagnostics (or, rather, a lack of such) --
please see below.
> for_each does not loop through the typelist for an unknown reason. Everything works again, when reducing the number of classes to
> 20 (and adjusting the include to set20.hpp).
> From my understanding BOOST_MPL_LIMIT_SET_SIZE and its brothers and sisters should not have any impact on numbered sequences, only
> on variadic ones.
That's correct.
> But still it looks as if something very strange is happening here.
Indeed. There are several factors at play here:
1. To be able to 'push_back' into a 'vectorN' on a compiler without
'typeof' support you have to have a 'vectorN+1' definition included.
If you don't have it included, you will get a compilation error:
#include "boost/mpl/vector/vector10_c.hpp"
#include "boost/mpl/push_back.hpp"
using namespace boost::mpl;
typedef push_back<
vector10_c<int,1,2,3,4,5,6,7,8,9,10>
, int_<11>
>::type t;
> test.cpp(9) : error C2039: 'type' : is not a member of 'boost::mpl::push_back<Sequence,T>'
> with
> [
> Sequence=boost::mpl::vector10_c<int,1,2,3,4,5,6,7,8,9,10>,
> T=boost::mpl::int_<11>
> ]
... except when you don't, like you experienced first hand.
2. The reason your code doesn't result in the error above is that
it doesn't invoke 'push_back' directly -- it supplies it as an
output operation to the inserter.
When the inserter does eventually invoke 'push_back' on
'vector20<...>' (at the last transformation step), the
invocation is done through the 'apply' metafunction and is
basically equivalent to this:
typedef apply< push_back<_1,_2>, vector20<...>, C21 >::type t;
This, of course, shouldn't make any difference and should produce
the same error, but it doesn't. Instead, it results in 't' being
an internal implementation type which has nothing to do with vector
(nor any other sequence).
That's where the bug (the absence of proper diagnostics) is.
3. Due to the lack of diagnostics in the previous step,
'ElementClassesAsPointer' ends up being an typedef to an internal
type that is not a sequence. On a surface, it seems that passing
a non-sequence to 'for_each' should still result in a compilation
error. In fact, 'for_each' and other algorithms/metafunctions in
the library almost never explicitly check conformance of the
provided template parameters to their corresponding concepts.
In the absence of explicit concept conformance verification,
invocation of a sequence algorithm on a non-sequence type
is almost guaranteed to be a no-op because of the following piece of
the 'begin'/'end' specification:
[..] If the argument is not a Forward Sequence, returns void_.
Thus the observed effect of
boost::mpl::for_each<ElementClassesAsPointer> (*this);
in your example.
The absence of concept checks can be argued to be another bug.
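(As an aside, one way for client code to catch this today is to assert sequence-ness explicitly before invoking the algorithm; a sketch using the library's 'is_sequence' metafunction:)

#include <boost/mpl/is_sequence.hpp>
#include <boost/static_assert.hpp>

// fails at compile time if the typedef is not actually a sequence
BOOST_STATIC_ASSERT(( boost::mpl::is_sequence<ElementClassesAsPointer>::value ));
boost::mpl::for_each<ElementClassesAsPointer>(*this);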
I've just checked in a fix for the bug in step #2, and also a number of related
fixes that together greatly reduce the chance of silent diagnostic failures in
other similar situations --.
If you are not compiling against the trunk, it should be safe to apply the diff
to your local Boost version as well.
The proper concept checks is something that will probably have to wait
until C++0x is out in the field.
Thank you for taking time to report this,
-- Aleksey Gurtovoy MetaCommun
|
https://lists.boost.org/boost-users/2008/06/37335.php
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
Hello everyone, welcome to this new article, where we are going to explore the React Native Netinfo API.
In this article we will try to make a simple WiFi network indicator that gets the current network data in realtime.
Our final result will look like this.
It’s pretty straightforward, and every time the strength of the Network changes, This icon will change.
By Strength, or to turn off, if the internet is not reachable.
Let’s Get Started
To get started make a new project, either using Expo cli or React Native Cli.
For this example, I will be using an Expo project, but if you prefer to use the React Native CLI, you can follow the instructions from here.
Environment Setup
Install Netinfo from expo
expo install @react-native-community/netinfo
And import it
import NetInfo from '@react-native-community/netinfo';
I will also be using Expo Icons, they come preinstalled with any Expo project, so no need to install them.
In particular we will use MaterialCommunityIcons, which has network icons for every strength level.
Then import it
import { MaterialCommunityIcons } from '@expo/vector-icons';
How does React Native Netinfo work?
Simply put, React Native Netinfo allows us to get data related to the network we are connected to.
It provides extensive information we can use, from the name of the WiFi network and its strength to the cellular generation, carrier name, Bluetooth status, etc.
For this example we will only need the WiFi-related data: whether it is connected and its strength.
But you can use it to get the rest of the data it exposes as well.
Get Wifi Data
To get the WiFi network data, we can add an event listener to NetInfo and subscribe to data changes in realtime.
NetInfo.addEventListener(state => {
  console.log('Connection type', state.type);
  console.log('Is connected?', state.isConnected);
});
Or, you can fetch data once
NetInfo.fetch().then(state => {
  console.log('Connection type', state.type);
  console.log('Is connected?', state.isConnected);
});
The state object that's being returned contains the information we need.
And from the details section, if the network is WiFi, we can get fields such as the signal strength (state.details.strength).
Workflow
So, Let’s create state field to update data to.
I will add 3 fields
const [isInternetReachable, setIsInternetReachable] = useState(false)
const [strength, setStrength] = useState(0)
const [icon, setIcon] = useState("network-strength-off")
isInternetReachable: Whether the internet is being accessed or not
strength: The strength of the connection
icon: Icon for the UI
I will make a small helper function to get different Icons, based on the data we have in state.
const getNetworkIcon = () => {
  // check reachability first; otherwise this branch could never be reached
  if (!isInternetReachable) {
    return "network-strength-outline"
  } else if (!strength) {
    return "network-strength-off"
  } else if (strength <= 25) {
    return "network-strength-1"
  } else if (strength <= 50) {
    return "network-strength-2"
  } else if (strength <= 75) {
    return "network-strength-3"
  } else if (strength <= 100) {
    return "network-strength-4"
  }
}
Then, we will use the useEffect hook to set up the event listener for Netinfo, along with the state updates.
useEffect(() => {
  NetInfo.addEventListener(state => {
    setStrength(state.details.strength)
    setIsInternetReachable(state.isInternetReachable)
    setIcon(getNetworkIcon())
  });
}, []);
With this hook and an empty dependency array, the effect runs once when the component mounts (the class-component equivalent of componentDidMount); here you can find a quick start on the React Native Hooks guide.
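One detail worth adding (a sketch of our own, not from the original article): NetInfo.addEventListener returns an unsubscribe function, so the listener can be cleaned up when the component unmounts by returning it from the effect.

useEffect(() => {
  // keep the unsubscribe handle returned by the library
  const unsubscribe = NetInfo.addEventListener(state => {
    setStrength(state.details.strength)
    setIsInternetReachable(state.isInternetReachable)
    setIcon(getNetworkIcon())
  });
  // returning it lets React remove the listener on unmount
  return unsubscribe;
}, []);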
And that was it: our app will now update based on the network data in realtime.
Thank you for taking the time to read my article.
|
https://reactnativemaster.com/react-native-netinfo-example/
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
To get a response back from an actor, use the ask pattern: it returns a Future, which you then wrap in a Play Promise, and then map to your own result type.
Below is an example of using our HelloActor with the ask pattern:
import akka.actor.*;
import play.mvc.*;
import play.libs.F.*;
import javax.inject.*;

import static akka.pattern.Patterns.ask;

@Singleton
public class Application extends Controller {

    final ActorRef helloActor;

    @Inject
    public Application(ActorSystem system) {
        helloActor = system.actorOf(HelloActor.props);
    }

    public Promise<Result> sayHello(String name) {
        return Promise.wrap(ask(helloActor, new SayHello(name), 1000))
            .map(response -> ok((String) response));
    }
}
A few things to notice:
- The ask pattern needs to be imported; it's often most convenient to statically import the ask method.
- The returned future is wrapped in a Promise. The response from the actor arrives as a plain Object, which is why it is cast to String before being passed to ok in the map step.
|
https://www.playframework.com/documentation/bg/2.4.x/JavaAkka
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
The background part of GUI.HorizontalSlider controls.
The padding property is used to determine the size of the area the thumb can be dragged within.
// Modifies only the horizontal slider style of the current GUISkin
var hSliderValue : float = 0.0; var style : GUIStyle;
function OnGUI () { GUI.skin.horizontalSlider = style; hSliderValue = GUILayout.HorizontalSlider (hSliderValue, 0.0, 10.0); }
using UnityEngine; using System.Collections;
public class ExampleClass : MonoBehaviour { public float hSliderValue = 0.0F; public GUIStyle style; void OnGUI() { GUI.skin.horizontalSlider = style; hSliderValue = GUILayout.HorizontalSlider(hSliderValue, 0.0F, 10.0F); } }
|
https://docs.unity3d.com/2017.1/Documentation/ScriptReference/GUISkin-horizontalSlider.html
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
feedback 0.2.1
Feedback #
A Flutter package for getting better feedback. It allows the user to give interactive feedback directly in the app. Get it on pub.dev!
Getting Started #
Just wrap your app in a BetterFeedback widget and supply an onFeedback function. The function gets called when the user submits his feedback. To show the feedback view just call BetterFeedback.of(context).show();
import 'package:feedback/feedback.dart';
import 'package:flutter/material.dart';

void main() {
  runApp(
    BetterFeedback(
      child: MyApp(
        key: GlobalKey(),
      ),
      onFeedback: alertFeedbackFunction,
    ),
  );
}
Sample #
Additional notes #
You can combine this with device_info to get additional information about the users environment to better debug his issues.
- Why does the content of my Scaffold change (gets repositioned upwards) while I'm writing my feedback?
- Probably because Scaffold.resizeToAvoidBottomInset is set to true. You could set it to false while the user provides feedback.
Known Issues #
- Some draggable things like ReorderableListView look strange while dragging.
Let me know if you are using this in your app, I would love to see it.
[0.2.1] - 08. March 2020 #
Changed #
- Text instead of Icons for drawing and navigating
- round stroke caps for drawn paths
[0.2.0] - 22. February 2020 #
This is the first non-beta version.
Added #
- Colors are now more customizable
Changed #
- Usage of the ControlsColumn hides the keyboard, which should result in better usability
[0.1.0-beta] - 15. February 2020 #
Fixed #
- Screenshots are taken correctly without any transparent border
- The bottom insets are taken into consideration while the feedback view is active
Changed #
- Hopefully the new icons in the ControlsColumn are more intuitive
[0.0.1] - 8. February 2020 #
- initial release
The full example is the standard Flutter counter app wrapped in BetterFeedback (as in the snippet above, but with key: GlobalKey(debugLabel: 'app_key')); the part specific to this package is a button that opens the feedback view:

FlatButton(
  child: const Text('Get feedback'),
  onPressed: () {
    BetterFeedback.of(context).show();
  },
)
Use this package as a library

1. Depend on it. Add this to your package's pubspec.yaml file:

dependencies:
  feedback: ^0.2.1

2. Import it:

import 'package:feedback/feedback.dart';
|
https://pub.dev/packages/feedback
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
We previously had a discussion about the fact that
SVN_SLEEP_FOR_TIMESTAMPS was made conditional on SVN_DEBUG (activated
by --enable-maintainer-mode). The consensus, from those who spoke up,
was that this admittedly wasn't an ideal solution.
Please see the patch below which always activates this code path and
renames the env-var to something long and obviously nasty. =)
The only thing I'm wondering is whether the result from getenv()
should be cached - it depends on how often svn_sleep_for_timestamps
gets called; but getenv() isn't a cheap function. If we think it's a
worthwhile optimization, I'll add it before committing. (I actually
wrote it up first with that optimization, but then decided that I'd
yank it before posting.)
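A cached lookup would amount to something like this (a sketch only, using the
same SVN_SLEEP_ENV_VAR / apr_strnatcasecmp pair as the patch below):

  static int
  sleep_is_disabled(void)
  {
    static int cached = -1;   /* -1 means "not looked up yet" */
    if (cached == -1)
      {
        const char *val = getenv(SVN_SLEEP_ENV_VAR);
        cached = (val && apr_strnatcasecmp(val, "yes") == 0) ? 1 : 0;
      }
    return cached;
  }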
Thoughts? Comments? If no one has any negative comments, I'll go
commit. -- justin
Make SVN_SLEEP_FOR_TIMESTAMPS always available, but under a 'scary' name:
"SVN_I_LOVE_CORRUPTED_WORKING_COPIES_SO_DISABLE_SLEEP_FOR_TIMESTAMPS".
This permits 'release' builds to run the test suite far faster (48mins down to
~5mins).
* subversion/libsvn_subr/time.c
(SVN_SLEEP_ENV_VAR): define what our new name is.
(svn_sleep_for_timestamps): Remove SVN_DEBUG ifdef; also flip meaning
of env variable.
* subversion/tests/cmdline/svntest/actions.py
(no_sleep_for_timestamps, do_sleep_for_timestamps): Update to new name and
flip the meaning of the value per above.
Index: subversion/libsvn_subr/time.c
===================================================================
--- subversion/libsvn_subr/time.c (revision 28699)
+++ subversion/libsvn_subr/time.c (working copy)
@@ -80,6 +80,7 @@ static const char * const human_timestamp_format =
/* Human explanatory part, generated by apr_strftime as "Sat, 01 Jan 2000" */
#define human_timestamp_format_suffix _(" (%a, %d %b %Y)")
+#define SVN_SLEEP_ENV_VAR
"SVN_I_LOVE_CORRUPTED_WORKING_COPIES_SO_DISABLE_SLEEP_FOR_TIMESTAMPS"
const char *
svn_time_to_cstring(apr_time_t when, apr_pool_t *pool)
@@ -289,18 +290,15 @@ void
svn_sleep_for_timestamps(void)
{
apr_time_t now, then;
+ char *sleep_env_var;
-#ifdef SVN_DEBUG
- const char *env_val = getenv("SVN_SLEEP_FOR_TIMESTAMPS");
+ sleep_env_var = getenv(SVN_SLEEP_ENV_VAR);
/* Sleep until the next second tick, plus a tenth of a second for margin. */
- if (! env_val || apr_strnatcasecmp(env_val, "no") != 0)
+ if (! sleep_env_var || apr_strnatcasecmp(sleep_env_var, "yes") != 0)
{
-#endif
now = apr_time_now();
then = apr_time_make(apr_time_sec(now) + 1, APR_USEC_PER_SEC / 10);
apr_sleep(then - now);
-#ifdef SVN_DEBUG
}
-#endif
}
Index: subversion/tests/cmdline/svntest/actions.py
===================================================================
--- subversion/tests/cmdline/svntest/actions.py (revision 28699)
+++ subversion/tests/cmdline/svntest/actions.py (working copy)
@@ -24,10 +24,10 @@ import main, verify, tree, wc, parsers
from svntest import Failure
def no_sleep_for_timestamps():
- os.environ['SVN_SLEEP_FOR_TIMESTAMPS'] = 'no'
+ os.environ['SVN_I_LOVE_CORRUPTED_WORKING_COPIES_SO_DISABLE_SLEEP_FOR_TIMESTAMPS']
= 'yes'
def do_sleep_for_timestamps():
- os.environ['SVN_SLEEP_FOR_TIMESTAMPS'] = 'yes'
+ os.environ['SVN_I_LOVE_CORRUPTED_WORKING_COPIES_SO_DISABLE_SLEEP_FOR_TIMESTAMPS']
= 'no'
def setup_pristine_repository():
"""Create the pristine repository and 'svn import' the greek tree"""
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org
For additional commands, e-mail: dev-help@subversion.tigris.org
|
https://svn.haxx.se/dev/archive-2007-12/0821.shtml
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
Posted March 3

Hi there, I was wondering if there is a way to namespace the "gsap" instance to avoid conflicts with other libraries that import gsap explicitly. The main reason behind this is that I develop WordPress themes, so other plugins may include the gsap library in different versions as well; what I want is a custom build used only by my theme that does not interfere with other possible instances of GSAP. How can this be done? I know it may not be good practice to include two or more versions of GSAP on a page, but with WordPress this issue needs to be resolved this way I guess. Thank you
|
https://greensock.com/forums/topic/23211-namespace-gsap-build/?tab=comments
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
Annotations your way in RESTEasy
I've been thinking about how to allow you to process Annotations "your way" in RESTEasy. As a user base matures, it asks for more functionality, and allowing that extension via the Open/Closed Principle is the way to go. I added a feature in RESTEasy that will allow you to accomplish some of this.
Here's an example of how I used that flexibility to inject a Spring request scoped bean into a RESTEasy controlled "Resource" method. The end result looks like this:
@Path("/")
public class TestBeanResource{
@GET
public String test(@Qualifier("testBean") TestBean bean){
return bean.configured;
}
}
Some complex scenarios that I worked on required complex rich-domain request-scoped beans. I had to do some whacky stuff in my business logic to make that happen. I really wanted a cleaner separation of concerns, and kept on wanting this kind of flexibility. Jersey (which I use in my day job) didn't have this kind of flexibility (since it very well might be an academic pursuit). I'm not sure how to build it there, so I tried it in RESTEasy, and it worked well from a proof-of-concept perspective.
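For context, the bean being injected is an ordinary Spring request-scoped bean. A hypothetical definition (the class name is invented for illustration) might look like this:

<bean id="testBean" class="com.example.TestBean" scope="request">
    <!-- the scoped proxy lets a request-scoped bean be handed out wherever it is injected -->
    <aop:scoped-proxy/>
</bean>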
In order to facilitate that behavior I needed to add the following bean:
@Provider
public static class QualifierInjectorFactoryImpl extends InjectorFactoryImpl implements
BeanFactoryAware {
BeanFactory beanFactory;
public QualifierInjectorFactoryImpl(ResteasyProviderFactory factory) {
super(factory);
}
public ValueInjector createParameterExtractor(Class injectTargetClass,
AccessibleObject injectTarget, Class type, Type genericType, Annotation[] annotations) {
final Qualifier qualifier = FindAnnotation.findAnnotation(annotations, Qualifier.class);
if (qualifier == null) {
return super.createParameterExtractor(injectTargetClass, injectTarget, type,
genericType, annotations);
} else {
return new ValueInjector() {
public Object inject(HttpRequest request, HttpResponse response) {
return beanFactory.getBean(qualifier.value());
}
public Object inject() {
// do nothing.
return null;
}
};
}
}
public void setBeanFactory(BeanFactory beanFactory) throws BeansException {
this.beanFactory = beanFactory;
}
}
Look for Spring's @Qualifier and inject a bean. It really doesn't look all that painful to me, but beauty is in the eye of the beholder.
What do you think? Here's the full TestCase and here's the Spring Configuration file
|
https://dzone.com/articles/annotations-your-way-resteasy
|
CC-MAIN-2018-43
|
en
|
refinedweb
|
This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.
On Mon, Aug 05, 2013 at 06:25:44PM -0400, Rich Felker wrote:
> Hi,
>
> [I'm resending this to linux-api instead of linux-kernel on the advice
> of Joseph Myers on libc-alpha. Please see the link to the libc-alpha
> thread at the bottom of this message for discussion that has already
> taken place.]

As told you earlier on linux-kernel just send a patch with your semantics to lkml. We're not going to reserve a value for a namespace that is reserved for the kernel to implement something that should better be done in kernel space.
|
https://sourceware.org/ml/libc-alpha/2013-08/msg00033.html
|
CC-MAIN-2018-43
|
en
|
refinedweb
|
Actuator: different techniques
To control the organs and limbs of a robot, a series of technologies are being used.
How to animate a robot?
Piezoelectric properties are exploited by the electrostatic actuator. A ceramic rod lengthens in proportion to the voltage applied to the poles at both ends. This technique is used to actuate the print heads of ink jet printers. It seems more devoted to small objects and could move micro-robots.
You can make an actuator on this principle. The ceramic is formed of an ionic polymer-metal composite, which has the property of deforming at very low voltages (less than 2 volts) and of being a good conductor.
With magnetostriction, a magnetic field distorts a ferromagnetic material, in other words an alloy that can be magnetized. The very low deformation of the material limits this process to sensitive devices, and since a transducer of large size is also needed, it is not suitable for micro-robots. The most suitable material is an iron-cobalt alloy, with or without added nickel. The elongation is up to 5 times higher than that obtained with piezoelectrics. The force obtained by such a deformation is huge, but the extension is barely visible to the naked eye.
The principle of the air pump is more modern, at least in robotics. By blowing air into a container, it inflates; by sucking the air out, it retracts. This property has been used to create a hand that can grip without any further actuation. It implies a network of channels to transfer the air to wherever it is needed to produce a movement of the limb. However, this technique does not seem ideal for large-scale development.
Now we will see two more appropriate technologies to robotics and more affordable.
Shape-memory
Shape-memory alloys are used in nanotechnology or to operate small mechanisms. A shape-memory alloy (SMA) subjected to a rise in temperature changes shape and returns to its original shape at ambient temperature. Its effectiveness depends in fact on the speed at which a temperature rise and cooling can be produced. This principle is implemented in Flexinol.
Flexinol is a muscle wire which contracts when heated; it has the advantage of producing a large force and the disadvantage of slowness. The contraction distance is around 5%, which does not mimic the action of a muscle, but it is used in combination with a mechanism (on the same principle as a pulley or gear box) to increase the distance. The material does not return to its original size after the action, so it must be paired with a spring or an opposing Flexinol wire to return to position. You cannot really use it for a whole robot because of its slowness, but it suits some parts of the robot, and it is inexpensive (less than $10 for 1 m of wire supporting 2 kg). The manufacturer is Dynalloy.
The Firgelli actuator
The technology used here has nothing to do with what we have seen previously. It is an ordinary electric motor coupled to a screw to convert the rotation into elongation or retraction! It returns to its initial position when you reverse the polarity. According to the manufacturer, it is possible to control the actuator from the LabVIEW software when it is connected to a USB port using a control card.
The L12 model above, which costs about a hundred dollars, has robotic uses. It is between 5 and 10 cm long, weighs 28 g for the lightest model, and travels a distance of two inches per second, which is very fast. It requires a voltage of 6 V that can be reduced, which then reduces its speed.
Building an actuator
After acquiring a 3D printer, it becomes possible to make your own actuators. This allows you to choose the shape and size desired for the mechanical components, and to choose the motor force and voltage.
The construction of an actuator can be inspired by the following simplified diagram:
The current is fed via a control card, which is used to invert the polarity to retract the piston, which is attached to the limbs of the robot. We can separate the actuator from the limb using a wire sliding in a sheath (the same principle as the brakes on a bicycle), in which case we must add a second wire for the antagonistic movements of contraction and extension.
There are Arduino cards, to assemble or buy, for controlling an electric motor, and a dedicated library: AFMotor. Once included, we can use code such as this:
#include <AFMotor.h>

AF_Stepper motor(20, 2);
motor.setSpeed(300);
...
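A minimal sketch (our own, assuming the Adafruit AFMotor library with a stepper wired to motor port 2) showing how the fragment above could drive the screw back and forth:

#include <AFMotor.h>

AF_Stepper motor(20, 2);              // 20 steps per revolution, motor port #2

void setup() {
  motor.setSpeed(300);                // speed in RPM
}

void loop() {
  motor.step(100, FORWARD, SINGLE);   // rotate one way: the screw extends
  delay(1000);
  motor.step(100, BACKWARD, SINGLE);  // rotate back: the screw retracts
  delay(1000);
}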
References
Artificial muscles using polymers. This company produces artificial muscles based on electro-active polymers. Its products are used by vivitouch joysticks to give feedback sensations.
Arduino. Open source platform to build your own control cards.
|
https://www.scriptol.com/robotics/personal/actuator.php
|
CC-MAIN-2018-43
|
en
|
refinedweb
|
Want to analyze the accuracy of your inventory? Using Extended Warehouse Management (EWM) on S/4 HANA? Keep reading and I’ll show you how!
Step 1 – Where’s the Beef?!…I mean data.
If you look in Eclipse or use the View Browser app in Fiori Launchpad, you'll notice delivered ABAP CDS Views I_PHYSINVTRYDOCHEADER and I_PHYSINVTRYDOCITEM. At this point, you may be thinking…great! Delivered content, let's start pulling data! Unfortunately, there is a catch. These delivered views read the Inventory Management (IM) tables, making them not so helpful if you are using EWM.
EWM uses an entirely different set of tables in S/4 than IM or WM. The tables in question are in the /LIME namespace. So instead of IKPF, ISEG (IM tables) or LINK, LINP (WM tables), data is stored in tables /LIME/PI_LOGHEAD, /LIME/PI_LOG_ITEM, /LIME/PI_DOC_IT, /LIME/PI_PAR_BIZ, /LIME/DOC_TB, /LIME/PI_IT_BIZ, etc. The records are joined using a GUID instead of a document number and line item.
For my requirements, I’m going to want the following fields:
- Document Number, Line Item, Log Category, Document Type
- Reference Document, Reference Document Type
- Count Date, Count User
- Warehouse, Storage Type, Storage Bin
- Material
- Booked, Counted and Difference Quantities, Units and Values
Following are the tables mentioned, with some notable fields listed:
- Table /LIME/PI_LOGHEAD contains the GUID (GUID_DOC), the physical inventory document (DOC_NUMBER), the warehouse (LGNUM), and the document year (DOC_YEAR).
- Table /LIME/PI_LOGITEM contains the GUID, as well as, the item (ITEM_NO), log category (LOG_TYPE), reference document (REF_DOC_NO), and reference document type (REF_DOC_TYPE).
- Table /LIME/PI_DOC_IT contains the GUID, item, document type (DOC_TYPE), document status (DOC_STATUS), as well as the count date (COUNT_DATE), count user (COUNT_USER), and direction of movement (DIF_DIRECTION) – Important! Which way is the count being adjusted? This is used to calculate either absolute difference or relative difference.
- Table /LIME/PI_PAR_BIZ contains the GUID, item, warehouse, storage type (LGTYP), and storage bin (LGPLA).
- Table /LIME/PI_DOC_TB contains the GUID, item, item type (ITEM_TYPE) – Important! This identifies if it is the booked, counted, or difference quantity, unit (UNIT), and quantity (QUANTITY).
- Table /LIME/PI_IT_BIZ contains the GUID, item, and material id (MATID)
Want to see the material that is being counted? You'll need to join an additional set of tables to get to the material number. The material is stored in the /LIME/PI_IT_BIZ table as a code. That code can be translated into the material product code in /SCMB/MATID_MAP and then translated into MATNR using /SAPAPO/MATKEY. Use the following joins to get from the material in the physical inventory document to the traditional material number (a quick sketch of the resulting select follows the list).
- /LIME/PI_IT_BIZ.MATID = /SCMB/MATID_MAP.MATID_X16
- /SCMB/MATID_MAP.MATID_C22 = /SAPAPO/MATKEY.MATID
- /SAPAPO/MATKEY contains field MATNR, whew!
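As a quick illustration, the join chain can be exercised with an Open SQL select of roughly this shape (a sketch only; the aliases are made up and the exact field list should be checked in your system):

SELECT itm~matid, mat~matnr
  FROM /lime/pi_it_biz AS itm
  INNER JOIN /scmb/matid_map AS map ON itm~matid = map~matid_x16
  INNER JOIN /sapapo/matkey AS mat ON map~matid_c22 = mat~matid
  INTO TABLE @DATA(lt_materials).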
Here is an example from a test system:
In this example, material 1814957 is being counted.
You will want to categorize the counts using the cycle count indicator. When using EWM, this indicator is stored in table /SAPAPO/MATLWH. Plug in the MATID from /SAPAPO/MATKEY and you will get the cycle count indicator (CCIND). Depending on your configuration, you will also want to specify the SCUGUID and ENTITLED_ID.
If you need the SCUGUID and ENTITLED_ID, tables /SCMB/TOENTITY and /SCWM/T300_MD will help you get these GUIDs from the warehouse and entitled party which are available in /LIME/PI_IT_BIZ.
I look forward to your questions and comments!
Next up, Step 2 – Building CDS Views
Check back next time for the steps to build custom CDS views to leverage these tables!
|
https://blogs.sap.com/2017/08/22/inventory-analysis-with-extended-warehouse-management-in-s4-hana-with-abap-cds-views/
|
CC-MAIN-2018-43
|
en
|
refinedweb
|
A JavaScript WAT and monoidal folds
We suppose all programming languages have the occasional wat (we believe the origin of this use of the term is this lightning talk by Gary Bernhardt), some function or behavior that is surprising and makes you think, "wait, what?" From time to time, one of these will go viral on Twitter, and we all have a good laugh-cry at how absurd programming can be.
When this tweet first appeared in our timelines,
Fortunately, JavaScript has always supported #alternativefacts
Math.min() < Math.max() => false
we had a chat about it, compared it to Haskell’s equivalents, and tried to figure out how another language might do better.
We’re going to use this as an excuse here to talk about folds, monoids, and why you should care.
The WAT
> Math.min() < Math.max()
false
This may or may not surprise you, depending on how you think of these functions. In JavaScript, Math.min (see the JavaScript Math.min() documentation) returns one of the following:
- The smallest number passed to it (if all of the arguments are numbers);
- NaN if any parameter isn't a number;
- Infinity when applied to zero arguments.
Whereas Math.max (see the JavaScript Math.max() documentation) returns:
- The largest number passed to it;
- NaN if any parameter isn't a number;
- -Infinity when applied to zero arguments.
That's how you get the perhaps unexpected behavior above: Math.min() is Infinity and Math.max() is -Infinity, so the expression reduces to Infinity < -Infinity. This is false, because infinity is greater than negative infinity.
In Haskell, the situation is different but hardly better (see the Haskell minimum and maximum documentation):

λ> minimum [] < maximum []
*** Exception: Prelude.minimum: empty list
What’s going on here?
JavaScript has defined values for these functions to return when they are given an empty collection of arguments. But in Haskell, there is no such value defined for these functions. (This isn't, as it happens, quite true. There is a value they could return (Nothing), if the return type were Maybe a instead of a. But they were written to throw an exception anyway – for "historical reasons," we assume. See here for source code.) How they should behave when they are given no arguments is undefined, so it just spits this nasty exception at us.
JavaScript versus Haskell
We will be comparing Haskell with JavaScript, but one difficulty with that is Haskell has two functions where JavaScript has one – or, Haskell has four where JavaScript has two. JavaScript has Math.min and Math.max but Haskell has minimum, min, maximum, and max. Let's take a moment to understand why. To avoid doubling the discussion, we'll focus on the "minimum" functions.

Haskell has two functions, min and minimum, that are closely related but different. min is the simpler of the two. The min function takes exactly two arguments. This is the type signature of min, specialized here to Double:
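min :: Double -> Double -> Double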
- This double colon syntax means “has the type”. All functions and all values in Haskell have a type, and this syntax is used to annotate a name with a type.
- This is the first argument. It's a value of type Double.
- This is the second argument, also a value of type Double.
- This is the return or result value. For min, it will be the lesser of the two arguments.
But Haskell makes a stronger distinction between a thing and a list of those things. Because we have static types that resolve at compile time, we must make the distinction up front about whether we have a single value or an arbitrarily long list of them. And so, we have the minimum function; in contrast to min, the minimum function takes one argument, but the argument is a list and can be as long as you like.
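minimum :: [Double] -> Double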
This takes a list of Double numbers and returns one of those values. Since this is the minimum function, it returns whichever is smallest.
minimum reduces the list to a single value by recursively applying min to pairs of elements from the list. In this way, minimum is more complicated than min and relies on min for its implementation.

Instead of providing only the more general minimum function, Haskell's base package provides you with both. Most of the time when you want min, it's because you're implementing something that will work over a large collection of values, such as a list of them, but min and the other basic functions in the Ord typeclass are exposed in the base library to provide those building blocks and also to provide a conceptual basis for what ordering means.

We got a bit ahead of ourselves, because above we showed you the types of the min and minimum functions specialized to the type Double. In fact, they work with many types – most numbers, and some other things, such as alphabetic characters, as well. What all those types have in common is a concept of being ordered. The Ord typeclass defines the set of functions that represent the essence of what it means for values to be orderable or comparable. A partial list of the functions in Ord:
class Eq a => Ord a where
  (<) :: a -> a -> Bool
  (<=) :: a -> a -> Bool
  max :: a -> a -> a
  min :: a -> a -> a
The Ord typeclass declares these functions but does not implement them because the implementation varies based on what a is. If you can implement these functions sensibly for a given type, then that type is orderable and has access to all of these functions and, more generally, to all functions that are defined to work with Ord constraints, which includes sort. (You don't have to write implementations for all the functions in a typeclass. Many typeclasses, including Ord, have one or two functions that you must implement for your type, and the rest can be inferred or derived from those. So, if you implement (<=) for a type, you get the rest of the ordering functions for free.)

JavaScript made a different choice about how this should all work than Haskell did, so Math.min can take different numbers of arguments where Haskell's versions cannot. Haskell does not really have an equivalent to the variadic syntax of JavaScript, although lists and some similar data structures can be used similarly.
In summation, please peruse this list of example usages of Math.min and consider corresponding ways to write the same expression in Haskell:
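A few illustrative pairs (our own examples):

Math.min(3, 5)           min 3 5
Math.min(3, 5, 7)        minimum [3, 5, 7]
Math.min()               minimum []
Math.min(...numbers)     minimum numbers †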
† This one isn’t as much of a direct translation as the others, because there isn’t quite an analog in Haskell to variadic functions in JavaScript.
Folds reduce
Regardless of how the JavaScript functions may be implemented, we can see from their behavior that they are at least conceptually similar to the Haskell functions. They take a collection of items, apply an operation that compares two values at a time over that collection, keeping track of the current state, and return a final result. In Haskell, we do this with a fold; you might know the concept of folding in JavaScript by the name reduce (see the JavaScript Array.reduce() documentation).

We can implement the minimum and maximum functions in Haskell to mimic the behavior of the JavaScript functions using a function called foldr that is similar to reduce. Folds in Haskell can be used on data structures other than lists, but we'll stick with talking about lists in this article as that most closely mimics the variadic nature of the JavaScript functions.
foldr takes three inputs (see the Haskell foldr documentation). Its type for lists is:
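foldr :: (a -> b -> b) -> b -> [a] -> b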
- The first argument is a binary function. It takes two arguments, one of type a and one of type b, and returns a value of type b. The two types, a and b, could be different types, but they are often the same type. (For example, if we are using foldr to sum a list of Integers, then both a and b are Integer. a is the type of the values in the list, and b is the type of the "start" or identity value.)
- The second is some starting or neutral value for the operation. We'll address this in more detail in a moment.
- The third is a collection, such as a list, of values of type a. The square brackets are list syntax in Haskell.
The second argument – that neutral initial value – is worth making a further comment about. When we talk about folds in Haskell, that argument is commonly called the “start” or “acc” value – “acc” being short for “accumulator.” It is often the identity value of the operation, but it’s also where the “state” of the computation “accumulates” as the function is recursively applied over the collection.
An identity value for an operation is a value that doesn’t change the value of any other arguments:
- Adding any number to zero gives you back that number; zero is the identity value of the binary operation addition for integers and some, but not all, types of numbers.
- Multiplying by one always gives you the number you started with; one is the identity value of the binary operation multiplication defined over integers and so on.
- Adding any list to an empty list and you just get back the list you started with; an empty list is an identity value for the binary operation list concatenation.
Addition and multiplication over integers and concatenation of lists are monoids.
Folds, identities, monoids
When we fold, the operation we use to "fold up" the collection is a binary operation, such as addition or min or max.

(+) :: Num a => a -> a -> a
min :: Ord a => a -> a -> a
max :: Ord a => a -> a -> a
By binary we mean that it takes two arguments. It is also usually the case that you could reparenthesize a succession of such operations and not change the outcome (the property known as associativity). (Some folding functions, such as foldr, may be used with non-associative folding functions, such as subtraction or exponentiation. Some folds, such as foldMap, which we'll look at in the next lesson, can only work with associative operations, such as addition or multiplication. Such folds are monoidal as they require a monoid to work.) This monoid part will be important when we look at the actual functions in Haskell.

We can consider a set (such as integers) together with such an operation over it as a single structure. If there is an identity or neutral element of the set for the operation, that structure is called a monoid. (If there is not an identity element, the structure is merely a semigroup. Every monoid is also a semigroup, but not every semigroup is a monoid.)

In JavaScript, finding the minimum or maximum of a collection of values is a monoidal operation: it uses a binary, associative operation to reduce the collection, and there are identities defined for those operations; Infinity for Math.min and -Infinity for Math.max. Having those identity values means that when no arguments are passed to the function, there is still a value it can return.
Let’s take a moment and convince ourselves that
Infinity and
-Infinity are identity values for these functions. If we think of an identity value as the value that always returns the other argument (to a binary function) unchanged, then they do suit this purpose.
The
min function compares two values of the same type and returns the smaller, or minimum, of the two. It’s the function that we’ll use to fold up our list of floating point numbers. If you are comparing two values, any value will be smaller than
Infinity, so passing
Infinity in as one of the arguments to
min
Infinity and
-Infinity are not Haskell code, so we have to use these division equations for those values. will return the other value unchanged. No matter how big your number is, it will be less than
Infinity.Yes, yes, unless it is Infinity, in which case it’s just the same.
λ> min (100000) (1/0) 100000.0
We’ll ignore some details of how folds recurse in Haskell, but we can think of it peeling one value of type
a off the list, passing it as one argument to the folding function, and then updating the “state” in the form of changing the “acc” value of type
b for the next iteration. In the first application of the binary function, you need a “start” value to serve as the other argument (along with one of the values in the list) since we only peel one value out of the collection at a time.
Let’s look at a hypothetical example. We can write a sum function like this:
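A minimal sketch of such a definition (the name sum' is an assumption, chosen here to avoid clashing with the Prelude’s own sum):

sum' :: Num a => [a] -> a
sum' xs = foldr (+) 0 xs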
The first argument to foldr is a binary operation, (+). The second argument is the identity value for integer addition. The third is the list of integers.
foldr begins evaluating its function applications at the end of the list, so it takes the last element of the list and the “start” value – 0 in this case – as the two arguments to the first addition. (This isn’t important for the current purpose, but the r in foldr stands for “right”, meaning “right-associative”. It groups the function applications from the right; that’s why it’s the last element of the list that gets passed to the addition operation first. The left-associating variant, called foldl, exists, but we do not usually recommend using it; the reasons are out of the scope of this article.)
An application of it evaluates like this:
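Taking sum' [1, 2, 3] as an assumed example:

sum' [1, 2, 3]
1 + (2 + (3 + 0))
6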
If we pass an empty list, analogous to an empty set of arguments, to the folding function, it will return that start or neutral value as the result. So, the start value isn’t just the identity for the operation or the place where the state of each computation gets tracked; it’s also our default return value in case the function doesn’t have any other arguments to apply to.
JavaScript in Haskell
We can implement maximum and minimum in Haskell in ways that are analogous to the JavaScript by making that start value Infinity or -Infinity. Then when we pass it an empty list, it will return that value and give us the same behavior as the JavaScript. Our binary operation will be max or min, which each compare two values of the same type and return the greater or lesser.
λ> :type min
min :: Ord a => a -> a -> a
λ> :type max
max :: Ord a => a -> a -> a
λ> min 3 6
3
So, we have foldr, min, and an identity. Now we can implement minimum, the JavaScript way.
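A sketch of what that definition could look like, assuming the Prelude’s own minimum is hidden so the name can be reused, and using (1/0) for Infinity (which restricts the element type to Fractional):

import Prelude hiding (minimum)

minimum :: (Ord a, Fractional a) => [a] -> a
minimum xs = foldr min (1/0) xs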
That should be equivalent to the following in JavaScript:
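A rough JavaScript equivalent, assuming a plain array of numbers and reduce:

const minimum = (xs) => xs.reduce((acc, x) => Math.min(acc, x), Infinity);

minimum([]);        // Infinity
minimum([3, 1, 2]); // 1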
We could implement a maximum function similarly by changing the binary function from min to max and the neutral value from (1/0) to (-1/0). Using those in the REPL, we can see the desired behavior:
λ> minimum []
Infinity
λ> maximum []
-Infinity
λ> minimum [] < maximum []
False
However, in Haskell, minimum and maximum are written to be more general than this. The way we’ve written it here will only work with certain types of numbers, but the real functions work with values of many types. (Haskell has a lot of types of numbers, including bounded Int types, unbounded Integers, and various fractional and floating point types. Since integer division by zero results in a “divide by zero” exception, rather than “infinity”, we can’t use these functions as written with integers, but it seems clear we’d want minimum and maximum functions that could handle them.) Next we’ll look at a way to preserve the generality of the functions while keeping the JavaScript behavior.
|
https://typeclasses.com/javascript/monoidal-folds
|
CC-MAIN-2018-43
|
en
|
refinedweb
|
I'm new to python and I'm having trouble trying to sort a list correctly. I have a map of strings->Word objects. A word object contains a list of all the places in a file where the word occurs. So in the Word class I have this method:
def _cmp_(self, other):
    print 'here'
    return other.num_occurences() - self.num_occurences()
Then elsewhere I have this code:
values = map.values()  # list of Word objects
values.sort()  # need to sort based on length of occurences
But it isn't sorting based on the cmp method. I put that print statement in the cmp method so that if values.sort() called the cmp method it would print 'here' but nothing prints.
|
https://www.daniweb.com/programming/software-development/threads/151784/sorting-problem
|
CC-MAIN-2018-43
|
en
|
refinedweb
|
I recently found a situation where I needed to get the root element of a Silverlight control. Unfortunately, Silverlight controls only have a Parent member and not a Top or Root member. If your control is embedded in multiple controls, you cannot easily get the root and will probably end up writing some ugly casting code that looks like this. DO NOT USE THIS CODE...KEEP READING!
private UserControl GetParent()
{
return (UserControl)((Grid)((Border)((StackPanel)((UserControl)((Grid)((StackPanel)this.Parent).Parent).Parent).Parent).Parent).Parent).Parent;
}
Solution 1 – Implement a recursive lookup that finds the top parent and returns it as a UserControl.
1.) Add a method extension that returns the parent UserControl.
public static class SilverlightExtensions
{
public static UserControl GetRoot(this FrameworkElement child)
{
var parent = child.Parent as FrameworkElement;
if (parent == null)
if (child is UserControl)
{
return (UserControl)child;
}
else
{
throw new Exception("The root element is an unexpected type. The root element should be a UserControl.");
}
return parent.GetRoot();
    }
}
2.) Access the parent by using the method extension:
this.GetRoot().Resources["SampleText"] = "x y z";
((TextBlock)this.GetRoot().FindName("TitlePage")).Text = "New Title Page";
Solution 2 – Add a property to the child control that references the parent. (Recommended) This is less expensive than a recursive loop, but you do have to always remember step 1 or else it will not work.
1.) Add a MyParent property of type UserControl in the child control’s class.
public UserControl MyParent { get; set; }
2.) In the parent control’s constructor, add a line that populates the child control’s MyParent property with itself.
public ParentUserControl()
{
InitializeComponent();
((ChildUserControl)this.FindName("childControl")).MyParent = (UserControl)this;
}
3.) Access the parent root using the MyParent property.
MyParent.Resources["SampleText"] = "x y z";
((TextBlock)MyParent.FindName("TitlePage")).Text = "New Title Page";
Note: You should be able to take a similar approach with WPF. One other alternative is to raise an event to the parent UserControl. Unfortunately, I haven’t seen any easy or clear-cut examples on how to do this.
Neither way will work if I want to find the parent for items in a StackPanel or other ItemsSource. When I have a DataTemplate for an item in a StackPanel, the parent of the DataTemplate's root grid will be null at runtime :(.
John Livingston Tech
Here's a solution that may be even easier. In the child control, set the 'Tag' property equal to the parent.
1) First, give the parent control a name in XAML. e.g. <UserControl x:Name="MainPageControl"....
2) Then, in the child control, set Tag="{Binding ElementName=MainPageControl}"
Then, you can get a reference to the parent just by using the 'Tag' property of the child control.
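For example, from inside the child control's code-behind, something along these lines would retrieve the parent (the cast and the resource usage are illustrative):

// Tag was bound to the parent UserControl in XAML.
var root = (UserControl)this.Tag;
root.Resources["SampleText"] = "x y z";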
|
http://johnlivingstontech.blogspot.com/2009/11/silverlight-get-top-level-usercontrol.html
|
CC-MAIN-2018-30
|
en
|
refinedweb
|
MVVM - IOC Containers and MVVM
By Laurent Bugnion | February 2013
Since the early days of object-oriented programming, developers have faced the issue of creating and retrieving instances of classes in applications and libraries. Various solutions have been proposed for this problem. For the past few years, dependency injection (DI) and inversion of control (IOC) have gained popularity among coders and have taken precedence over some older solutions such as the Singleton pattern.
The Singleton pattern is a convenient way to create and expose an instance of a class, but it has a few disadvantages. For example, the code in Figure 1 shows a class exposing a property following the Singleton pattern.
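A minimal sketch of such a class (the class name here is only illustrative):

public class DataService
{
    private static DataService _instance;

    // Private constructor: the only way to obtain the instance is the Instance property.
    private DataService()
    {
    }

    // The instance is created on demand, the first time the property is read,
    // and there is no built-in way to delete it afterwards.
    public static DataService Instance
    {
        get
        {
            if (_instance == null)
            {
                _instance = new DataService();
            }

            return _instance;
        }
    }
}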
Here are a few things to note about the code in Figure 1:
- The constructor of the class is private, meaning that the only way to create the instance is by using the Instance static property. We could change that by making the constructor internal or even public.
- The instance is created on demand, which is generally a good thing, but sometimes we want the instance to be ready as soon as the application starts. We can do this by calling the Instance property as soon as possible.
- More annoying, there is no way to delete the instance, and in certain circumstances this can cause memory leaks. One way to solve this issue is to add a static Delete method to the class.
- Instances other than the main Instance property can be created, but each would require different accessors, either properties or methods. The Instance property does not allow any parameter to be passed to the constructor.
With these improvements and a few others, we could transform this pattern into something useful and flexible. But an even cleaner approach is to remove this infrastructure code from each class we implement and instead use an external object that acts like a cache for the instances we need in various parts of our applications.
This is where IOC containers come in handy. The term inversion of control means that the act of creating and keeping the instance is not the responsibility of the consuming class anymore, as it is in traditional object-oriented programming, but is instead delegated to an external container. While this is not an obligation, the cached instances are often injected into the consumer class’s constructor or made available through a property of the consumer. This is why we talk about dependency injection, or DI. Note that dependency injection is not necessary for IOC containers to work, but it is a convenient way to decouple the consumer from the cached instances and from the cache itself.
For example, a classic application would be composed as shown in Figure 2. In this example, the programmer decided to provide two implementations of the service, one for run time and one for test purposes. In some cases, a developer might even want to provide a third implementation for design-time data—for example, to be used in Expression Blend or in the Visual Studio designer.
public class Consumer
{
    private IDataService _service;

    public Consumer()
    {
        if (App.IsTestMode)
        {
            _service = new TestDataService();
        }
        else
        {
            _service = new DataService();
        }
    }
}

public interface IDataService
{
    Task<DataItem> GetData();
}

public class DataService : IDataService
{
    public async Task<DataItem> GetData()
    {
        // TODO Provide a runtime implementation
        // of the GetData method.
        // ...
    }
}

public class TestDataService : IDataService
{
    public async Task<DataItem> GetData()
    {
        // TODO Provide a test implementation
        // of the GetData method.
        // ...
    }
}
With dependency injection, the code becomes much cleaner, as shown in Figure 3. This is the core principle of DI: another class, somewhere, is taking care of creating the correct implementation of the service, and injecting it.
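A sketch of that cleaner consumer, reusing the IDataService interface from Figure 2:

public class Consumer
{
    private readonly IDataService _service;

    // The concrete IDataService is chosen and created elsewhere, then injected here.
    public Consumer(IDataService service)
    {
        _service = service;
    }
}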
Of course, we need to take care of creating the service and injecting it into the consumer, and this is where the IOC container becomes useful.
Using an IOC Container
Quite a few IOC containers are available on the market. For instance, Unity (by Microsoft), StructureMap and Castle Windsor (two open source projects) are very popular .NET-based IOC containers and available for multiple platforms. In this article, I’ll use MVVM Light’s SimpleIoc to illustrate the usefulness of an IOC container in MVVM-based applications.
Note: The sample application provided with this article shows all the techniques detailed using MVVM Light and the SimpleIoc container. The application is a simple RssReader that pulls a list of the latest articles from CNN.com and displays them. The app has two pages: MainPage shows the list of articles with their title. On click, the app navigates to a DetailsPage, which shows the article’s title, a summary, as well as a link to the main story on the CNN website. The sample package contains two implementations of the app, one for Windows 8 and the other for Windows Phone 8.
SimpleIoc is, like the name says, a rather simple IOC container, which allows registering and getting instances from the cache in an uncomplicated manner. It also allows composing objects with dependency injection in the constructor. With SimpleIoc, you can register the IDataService, the implementing class or classes and the consumer class, as shown in Figure 4.
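A sketch of such a registration (the IsTestMode flag mirrors the one used in Figure 2):

if (App.IsTestMode)
{
    SimpleIoc.Default.Register<IDataService, TestDataService>();
}
else
{
    SimpleIoc.Default.Register<IDataService, DataService>();
}

SimpleIoc.Default.Register<ConsumerWithInjection>();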
In clear text, the code in Figure 4 means the following: if the application is in test mode, every time anyone needs an IDataService, pass the cached instance of TestDataService; otherwise, use the cached instance of DataService. Note that the action of registering does not create any instance yet—the instantiation is on demand, and only executed when the objects are actually needed.
The next step is to create the ConsumerWithInjection instance with code such as that shown in Figure 5.
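A sketch of such a property (the property name is only illustrative):

public ConsumerWithInjection Consumer
{
    get
    {
        return SimpleIoc.Default.GetInstance<ConsumerWithInjection>();
    }
}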
When the property in Figure 5 is called, the SimpleIoc container will run the following steps:
- Check whether an instance of ConsumerWithInjection already exists in the cache. If yes, the instance is returned.
- If the instance doesn’t exist yet, inspect the ConsumerWithInjection’s constructor. It requires the instance of IDataService.
- Check whether an instance of IDataService is already available in the cache. If yes, pass it to the ConsumerWithInjection’s constructor.
- If not, create an instance of IDataService, cache it and pass it to the ConsumerWithInjection’s constructor.
Of course, since the instance is cached, the call to the GetInstance method can be executed multiple times in various parts of the application. It is not strictly necessary to use the constructor injection shown earlier in Figure 3, although it is an elegant manner in which to compose objects and to decouple the dependencies between objects.
The GetInstance method can also return keyed instances. This means that the IOC container can create multiple instances of the same class, keeping them indexed with a key. In that way the IOC container acts as a cache, and the instances are created on demand: when GetInstance is called with a key, the IOC container checks whether an instance of that class is already saved with that key. If it is not, the instance is created before it is returned. It is then saved in the cache for later reuse. It is also possible to get all the instances of a given class, as shown in Figure 6. The last line of that code returns an IEnumerable<ConsumerWithInjection> containing the four instances created earlier.
// Default instance
var defaultInstance = SimpleIoc.Default.GetInstance<ConsumerWithInjection>();

// Keyed instances
var keyed1 = SimpleIoc.Default.GetInstance<ConsumerWithInjection>("key1");
var keyed2 = SimpleIoc.Default.GetInstance<ConsumerWithInjection>("key2");
var keyed3 = SimpleIoc.Default.GetInstance<ConsumerWithInjection>("key3");

// Get all the instances (four)
var allInstances = SimpleIoc.Default.GetAllInstances<ConsumerWithInjection>();
Various Ways to Register a Class
The most interesting feature of an IOC container is the way that a class can be registered in order to create the instances. Each IOC container has certain features that make it unique in the way that classes are registered. Some of them use code-only configuration, while others can read external XML files, allowing for great flexibility in the way that classes are instantiated by the container. Others allow for powerful factories to be used. Some, like MVVM Light’s SimpleIoc, are more simple and straightforward. Which IOC container to use is a decision that rests on a few criteria, such as the team’s familiarity with a specific container, the features needed for the application, and others.
Registration can occur in a central location (often called the service locator), where important decisions can be taken, such as when to use the test implementation of all the services. Of course, it is also possible (and often necessary) to register some classes in other locations in the application.
In some MVVM applications (and notably apps based on the MVVM Light toolkit), a class named ViewModelLocator is used to create and expose some of the application’s ViewModels. This is a convenient location in which to register most of the services and service consumers. In fact, some ViewModels can also be registered with the IOC container. In most cases, only the ViewModels that are long-lived are registered in the ViewModelLocator class. Others may be created ad hoc. In navigation apps such as Windows 8 or Windows Phone apps, these instances may be passed to a page when the navigation occurs. In some cases, SimpleIoc may be used as a cache for keyed instances in order to make this step easier, as we will see later in this article.
To make the IOC container easier to swap with another (should the need arise), many such containers (including MVVM Light’s SimpleIoc) use the Common Service Locator implementation. This relies on a common interface (IServiceLocator) and the ServiceLocator class, which is used to abstract the IOC container’s implementation. Because SimpleIoc implements IServiceLocator, we can have the code shown in Figure 7 in our application, which executes in the same manner as the code in Figure 6. If at a later point another IOC container is selected for the application, only references to the SimpleIoc class need to be swapped. The references to ServiceLocator, on the other hand, can remain as is.
ServiceLocator.SetLocatorProvider(() => SimpleIoc.Default);

// Default instance
var defaultInstance = ServiceLocator.Current.GetInstance<ConsumerWithInjection>();

// Keyed instances
var keyed1 = ServiceLocator.Current.GetInstance<ConsumerWithInjection>("key1");
var keyed2 = ServiceLocator.Current.GetInstance<ConsumerWithInjection>("key2");
var keyed3 = ServiceLocator.Current.GetInstance<ConsumerWithInjection>("key3");

// Get all the instances (four)
var allInstances = ServiceLocator.Current.GetAllInstances<ConsumerWithInjection>();
Instead of registering a class and delegating the instance creation to the IOC container, it is also possible to register a factory expression. This delegate (usually expressed as a lambda expression) returns an instance. Because the delegate can contain any logic, it is possible to execute some complex creation code if needed, or even to return an instance that was created earlier in another part of the application. Note that the factory expression is only evaluated when needed.
The code in Figure 8, for example, shows how to register a DataItem that accepts a DateTime as a constructor parameter. Because the constructor is only executed when GetInstance is called (and not when Register is called), the parameter will accurately show the time at which the factory code was called the first time. Subsequent calls to GetInstance will, however, show the same time, because the instance was already created and cached.
public async void InitiateRegistration()
{
    // Registering at 0:00:00
    SimpleIoc.Default.Register(() => new DataItem(DateTime.Now));
    Debug.WriteLine("Registering at " + DateTime.Now);

    await Task.Delay(5000);

    // Getting at 0:00:05
    var item = ServiceLocator.Current.GetInstance<DataItem>();
    Debug.WriteLine("Creating at " + item.CreationTime);

    await Task.Delay(5000);

    // Getting at 0:00:10. Creation time is still the same
    item = ServiceLocator.Current.GetInstance<DataItem>();
    Debug.WriteLine("Still the same creation time: " + item.CreationTime);
}
MVVM Light’s ViewModelLocator
When MVVM Light is installed (using the MSI, available under the Download section), a new application can be created in Visual Studio using the project templates provided. For instance, the following steps create a new Windows 8 (WinRT) app:
- Install MVVM Light.
- Using the Readme file that automatically opens in your favorite browser, install the VSIX file, which makes the project templates available in Visual Studio.
- Start (or restart) Visual Studio 2012.
- Select File, New Project.
- Under Visual C#, Windows Store, select the MvvmLight (Win8) project template, enter a name and location for the project and click OK.
Note that MVVM Light supports all XAML-based frameworks (Windows Presentation Foundation, Silverlight, Windows Phone, Windows 8), so the same experience can be reproduced with any of these frameworks. Once the project is created, open the file ViewModelLocator.cs in the ViewModel folder. Notice the code reproduced in Figure 9.
Because the MainViewModel expects an IDataService in its constructor, SimpleIoc will take care of creating and composing the service and the ViewModel on demand. A little farther along in the ViewModelLocator, the MainViewModel is exposed as a property, making it possible to data bind the MainPage’s DataContext and take advantage of the design-time service implementation (DesignDataService) or the run-time implementation (DataService). To visualize the difference, press Ctrl+F5 to run the application (in the simulator or on the local machine). The main page shows the text “Welcome to MVVM Light.” Then, in Visual Studio, right-click MainPage.xaml in Solution Explorer, and select either Open in Blend or View in Designer. The same UI is shown, this time with the text “Welcome to MVVM Light [design].”
The difference between run-time data and design-time data in this very simple, default application is made possible by the two implementations of IDataService, and the ViewModelBase.IsInDesignModeStatic property, triggering the correct service implementation to be registered. Once the MainViewModel is resolved for the first time, the IOC container is able to create the correct service, caching it and passing it to the MainViewModel constructor and then caching that instance.
Of course, the IOC container also supports unregistering a class. When some instances have already been created and cached, the Unregister method will remove those instances from the cache. If those instances are not referenced anywhere else in the application, they will be deleted by the garbage collector and memory will be reclaimed.
Dealing with View Services
This article already talked about services—that is, classes that provide the ViewModels with data and functionality (for example, to and from a Web service). Sometimes, however, the ViewModel also needs another kind of service in order to use functionality in the View. In this case, we talk about View Services.
Two typical View Services are the NavigationService and the DialogService. The first provides navigation functionality such as NavigateTo, GoBack, and so on. Windows Phone and Windows 8 Modern apps are almost always using a NavigationService because they are navigation applications using pages. The DialogService is very useful from a ViewModel because the developer doesn’t want to know how a message is going to be displayed to the user. The ViewModel merely provides the error message. The designer (or integrator) is in charge of displaying the message—for example, in a status bar or a custom dialog box. The DialogService typically offers functionality such as ShowStatus, ShowMessage, ShowError, and so on.
The implementation of NavigationService differs depending on the platform being used. On Windows 8 Modern apps, for example, the NavigationService can be a self-contained class (in a utility library), using the current window’s Frame to perform the actual navigation. This is, in fact, also how each Page performs navigation, by using its built-in NavigationService property. Of course, a ViewModel is a plain object so it does not have such a built-in property, and here again the IOC container comes handy: the NavigationService is registered in the ViewModelLocator, cached in the IOC container and can be injected into each ViewModel as needed. This operation is shown in Figure 10 (taken from the reference implementation provided with this article).
public class ViewModelLocator
{
    static ViewModelLocator()
    {
        ServiceLocator.SetLocatorProvider(() => SimpleIoc.Default);

        if (ViewModelBase.IsInDesignModeStatic)
        {
            SimpleIoc.Default.Register<IRssService, Design.DesignRssService>();
        }
        else
        {
            SimpleIoc.Default.Register<IRssService, RssService>();
        }

        SimpleIoc.Default.Register<INavigationService, NavigationService>();
        SimpleIoc.Default.Register<MainViewModel>();
    }

    public MainViewModel Main
    {
        get
        {
            return ServiceLocator.Current.GetInstance<MainViewModel>();
        }
    }
}

public class MainViewModel : ViewModelBase
{
    private readonly IRssService _rssService;
    private readonly INavigationService _navigationService;

    public ObservableCollection<RssItem> Items { get; private set; }

    public MainViewModel(
        IRssService rssService,
        INavigationService navigationService)
    {
        _rssService = rssService;
        _navigationService = navigationService;
        Items = new ObservableCollection<RssItem>();
    }

    // ...
}
Wrapping Up
This article is part of a series about MVVM in Windows 8 and MVVM Light. The next article will describe some of the challenges of a decoupled architecture, such as triggering navigation between pages, displaying dialog boxes to users and working with animations. In that article, I’ll cover solutions that involve MVVM Light’s Messenger component and the use of View Services.
References
The MVVM Light Toolkit is available at
Other Libraries:
- Unity (Microsoft Patterns and Practices):
- StructureMap:
- Castle Windsor:
- Common Service Locator:.
|
https://msdn.microsoft.com/en-us/magazine/jj991965.aspx
|
CC-MAIN-2018-30
|
en
|
refinedweb
|
A bridge between UFOs and FontTools.
Project description
ufo2ft provides functions for compiling a UFO into an OpenType font, compileOTF (shown below) and compileTTF, which work exactly the same way:
from defcon import Font
from ufo2ft import compileOTF

ufo = Font('MyFont-Regular.ufo')
otf = compileOTF(ufo)
otf.save('MyFont-Regular.otf')
In most cases, the behavior of ufo2ft should match that of ufo2fdk, whose documentation is retained below (and hopefully is still accurate).
Naming Data.
Additionally, if you have defined any naming data, or any data for that matter, in table definitions within your font's features, that data will be honored.
Feature generation
If your font’s features do not contain kerning/mark/mkmk features, ufo2ft will create them based on your font’s kerning/anchor data.
In addition to Adobe OpenType feature files, ufo2ft also supports the MTI/Monotype format. For example, a GPOS table in this format would be stored within the UFO at data/com.github.googlei18n.ufo2ft.mtiFeatures/GPOS.mti.
Fallbacks
Most of the fallbacks have static values. To see what is set for these, look at fontInfoData.py in the source code.
In some cases, the fallback values are dynamically generated from other data in the info object. These are handled internally with functions.
Merging TTX
If the UFO data directory has a com.github.fonttools.ttx folder with TTX files ending with .ttx, these will be merged in the generated font. The index TTX (generated when using ttx -s) is not required.
|
https://pypi.org/project/ufo2ft/
|
CC-MAIN-2018-30
|
en
|
refinedweb
|
Part 2: Layouts
Some of the concepts in this section will be easier to understand if you have read the Blaze documentation.
Purpose
In general layouts are a way of applying a structure to a site beyond what you would want to have in one particular template, allowing you to share components and reduce repetition. This is something you might do in server-side includes in other languages/frameworks.
How Reaction uses layouts
Reaction uses one primary layout as the master or default, called coreLayout. This layout is just another React component. The code in this template (/imports/plugins/core/layout/client/components/coreLayout.js) is pretty minimal and, as you can see, contains very little. So before jumping in to replace this, you may want to ask yourself if this is what you actually need to do. But because we are changing the global structure of our site to accommodate our customised <main> section, we need to.
/client/templates/layouts/core.js
import React, { Component } from "react";
import PropTypes from "prop-types";
import classnames from "classnames";
import Blaze from "meteor/gadicc:blaze-react-component";
import { Template } from "meteor/templating";
import { getComponent as assertComponent, registerComponent } from "/imports/plugins/core/components/lib";

class CoreLayoutBeesknees extends Component {
  static propTypes = {
    actionViewIsOpen: PropTypes.bool,
    data: PropTypes.object,
    structure: PropTypes.object
  }

  getComponent(name) {
    try {
      if (name) {
        return assertComponent(name);
      }
    } catch (error) {
      // No-op
    }
    return null;
  }

  renderMain() {
    const template = this.props.structure && this.props.structure.template;
    const mainComponent = this.getComponent(template);

    if (mainComponent) {
      return React.createElement(mainComponent, {});
    } else if (Template[template]) {
      return (
        <Blaze template={template} />
      );
    }
    return null;
  }

  render() {
    const { layoutHeader, layoutFooter, template } = this.props.structure || {};

    const pageClassName = classnames({
      "page": true,
      "show-settings": this.props.actionViewIsOpen
    });

    const headerComponent = layoutHeader && this.getComponent(layoutHeader);
    const footerComponent = layoutFooter && this.getComponent(layoutFooter);

    return (
      <div className={pageClassName}>
        {headerComponent && React.createElement(headerComponent, {})}
        <Blaze template="cartDrawer" className="reaction-cart-drawer" />
        <main>
          <div className="rui beesknees">
            <div className="bkdebug">
              <em>{"Bee's Knees layout"}</em>
            </div>
            <div className="bkdebug">
              <em>{"layoutHeader component:"}</em> {this.props.structure.layoutHeader || "not applicable"}
            </div>
            <div className="bkdebug">
              <em>{"layoutFooter component:"}</em> {this.props.structure.layoutFooter || "not applicable"}
            </div>
            <div className="bkdebug">
              <em>main {this.getComponent(template) ? "component:" : "(Blaze template):"}</em> {template}
            </div>
          </div>
          { this.renderMain() }
        </main>
        {footerComponent && React.createElement(footerComponent, {})}
      </div>
    );
  }
}

// Register component for it to be usable
registerComponent("coreLayoutBeesknees", CoreLayoutBeesknees);

export default CoreLayoutBeesknees;
In order to change our default layout, we need to add a record to the registry for our package. We also need to add a special defaults.js that will add some global options.
Note: If you just want to override the homepage and leave everything else alone, you can do that by adding special INDEX_OPTIONS parameters to this
defaults.js file. See the "How to create a custom homepage" documentation for more info.
First let's create our
defaults.js with our custom layout. You will place this file in the
client folder in your plugin. The
defaults.js just looks like this:
import { Session } from "meteor/session";

Session.set("DEFAULT_LAYOUT", "coreLayoutBeesknees");
In order for this file to take effect, we need to also import it. So we add it to the index.js in your client directory.
import "./defaults";
We also need to add our layout to the registry via our
register.js. We are going to add a
layout entry that looks like this:
layout: [{
  layout: "coreLayoutBeesknees",
  workflow: "coreProductGridWorkflow",
  collection: "Products",
  theme: "default",
  enabled: true,
  structure: {
    template: "products",
    layoutHeader: "NavBar",
    layoutFooter: "Footer",
    notFound: "productNotFound",
    dashboardHeader: "",
    dashboardControls: "dashboardControls",
    dashboardHeaderControls: "",
    adminControlsFooter: "adminControlsFooter"
  }
}]
so that our file will look like this /register.js
import { Reaction } from "/server/api";

// Register package as ReactionCommerce package
Reaction.registerPackage({
  label: "Bees Knees",
  name: "beesknees",
  icon: "fa fa-vine",
  autoEnable: true,
  layout: [{
    layout: "coreLayoutBeesknees",
    workflow: "coreProductGridWorkflow",
    collection: "Products",
    theme: "default",
    enabled: true,
    structure: {
      template: "productsLanding",
      layoutHeader: "NavBar",
      layoutFooter: "Footer",
      notFound: "productNotFound",
      dashboardHeader: "",
      dashboardControls: "dashboardControls",
      dashboardHeaderControls: "",
      adminControlsFooter: "adminControlsFooter"
    }
  }]
});
You can see we specified several things there. The most important thing was the "layout" record, which refers to the new layout template we will create in the next chapter. We also specify which templates we want for the header and footer (we are just keeping the defaults for now, which are built-in React components called NavBar and Footer), and which main template we render, which is products. We also specified which template we would use for "notFound". When we get to the routing and templates, more of this will make sense.
One important aspect is the casing of the properties within the structure. React component names start with capital letters, whereas Blaze template names begin with a lowercase character. For now it's not possible to use React components for properties that expect Blaze template names to be passed (and vice versa). Though, in the future, all properties should designate React component names.
More detailed documentation on the other
register.js can be found in this blog post.
One important thing to understand is that at any point in time when Reaction goes to render a route/page, it's going to
determine how to pull the layout record from a key of
layout + workflow. The
coreWorkflow is a special case in that it is a workflow with just one step.
It is essentially the "default" workflow when you hit the home page.
Also note that:
- We have other parts that we could substitute without changing our layout. For example, we can point our header or footer to a custom React component by changing the values for "layoutHeader" or "layoutFooter".
- There is a priority field on layout objects, with a default value of 999. When Reaction goes to render a route/page (as explained above) and more than one layout match is found, this priority field is used to determine which one is used. Lower values override the default. See example.
Next: Customizing templates
|
https://docs.reactioncommerce.com/docs/next/plugin-layouts-3
|
CC-MAIN-2018-30
|
en
|
refinedweb
|
models are equivalent but express things differently.”
This is about the distinction between Go channels and its cousin Erlang messages passing approach to communicating between processes.
Implementing Go channels with Elixir processes and messages
Before discovering that it has been done before, I wrote some simple Elixir code to play with the theoretical equivalence between those two models.
Notice that this implementation is not meant to be complete, nor efficient and is not recommended for production software. Don’t do it at home.
defmodule GoChannel do
  def make do
    spawn(&GoChannel.loop/0)
  end

  def write(channel, val) do
    send(channel, {:write, val})
  end

  def read(channel) do
    send(channel, {:read, self})

    receive do
      {:read, channel, val} -> val
    end
  end

  def loop do
    receive do
      {:read, caller} ->
        receive do
          {:write, val} ->
            send(caller, {:read, self, val})
            loop
        end
    end
  end
end
Some tests
defmodule GoChannelTest do
  use ExUnit.Case

  test "write and read to a channel" do
    channel = GoChannel.make
    GoChannel.write(channel, 'hello')
    assert GoChannel.read(channel) == 'hello'
  end

  test "write and read preserves order" do
    channel = GoChannel.make
    GoChannel.write(channel, 'hello')
    GoChannel.write(channel, 'world')
    assert GoChannel.read(channel) == 'hello'
    assert GoChannel.read(channel) == 'world'
  end
end
This pseudo channel implementation relies on a combination of messages between processes to simulate the original FIFO behaviour of channels.
The same way one could pass a channel as parameter to other functions, since it’s a first-class citizen, we could pass the result of
GoChannel.make, since it’s a
PID, which in turn is a first-class citizen in Elixir.
Back to concurrency patterns
The first pattern demonstrated in Rob’s talk was
fanIn, where two channels were combined into a single one.
func fanIn(input1, input2 <-chan string) <-chan string {
    c := make(chan string)
    go func() { for { c <- <-input1 } }()
    go func() { for { c <- <-input2 } }()
    return c
}
We could translate this code to Elixir, using our borrowed abstraction:
defmodule Patterns do
  def fan_in(chan1, chan2) do
    c = GoChannel.make

    spawn(loop(fn -> GoChannel.write(c, GoChannel.read(chan1)) end))
    spawn(loop(fn -> GoChannel.write(c, GoChannel.read(chan2)) end))

    c
  end

  defp loop(task) do
    fn -> task.(); loop(task) end
  end
end
Some tests:
defmodule PatternsTest do
  use ExUnit.Case

  test "fan_in combines two channels into a single one" do
    chan1 = GoChannel.make
    chan2 = GoChannel.make

    c = Patterns.fan_in(chan1, chan2)

    GoChannel.write(chan1, 'hello')
    GoChannel.write(chan2, 'world')

    assert GoChannel.read(c) == 'hello'
    assert GoChannel.read(c) == 'world'
  end
end
We could go even further in this mimic game and try to implement the
select statement, but that would be a very extensive one. First let’s reflect a little about composing more complex functionality with channels.
Channels as Streams
From a consumers perspective, reading from a channel is like getting values out of a stream. So, one could wrap a channel in a stream, using the
Stream.unfold/2 function:
def stream(channel) do
  Stream.unfold(channel, fn channel -> {read(channel), channel} end)
end
This function returns a
Stream, which gives us lots of power to compose using its module functions like
map/2,
zip/2,
filter/2, and so on.
One test to demo that:
test "compose channel values with streams" do channel = GoChannel.make stream = GoChannel.stream(channel) GoChannel.write(channel, 1) GoChannel.write(channel, 2) GoChannel.write(channel, 3) doubles = Stream.map(stream, &(&1 * 2)) |> Stream.take(2) |> Enum.to_list assert doubles == [2, 4] end
Reviewing comparisons
The following quote from Rob Pike's talk is one common analogy used to compare channels and Erlang concurrency models:
“Rough analogy: writing to a file by name (process, Erlang) vs. writing to a file descriptor (channel, Go).”
I think analogies are really useful for communication but I believe they work better as the start of an explanation, not its summarization. So I think we could detail differences a little further.
For example,
PIDs are not like “file names” since they are anonymous and automatically generated. As we just saw,
PID is a first-class citizen and in the language’s perspective, is just as flexible as a channel.
I would say that the channels abstraction reinforces isolation between producer and consumer, in the sense that goroutines writing to a channel don't know when, nor by whom, that information is going to be consumed. But it doesn't mean that using processes and messages one could not achieve the same level of isolation, as we just demoed.
On the other hand, identifying producers and consumers explicitly allow us to monitor and supervise them, allowing language like Erlang and Elixir to leverage the so-called supervision trees useful for building fault-tolerant software in those languages.
Besides being an interesting exercise to mimic Go’s way of solving problems, one should be aware that Erlang and Elixir have their own abstractions and patterns for handling concurrency.
For example, one could use the GenEvent module to implement a pub/sub functionality.
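A rough sketch of what that could look like (the handler module and event names here are made up for illustration):

defmodule Subscriber do
  use GenEvent

  # Every event pushed to the manager is delivered to this callback.
  def handle_event(event, state) do
    IO.inspect(event)
    {:ok, state}
  end
end

{:ok, manager} = GenEvent.start_link()
GenEvent.add_handler(manager, Subscriber, [])
GenEvent.notify(manager, {:new_article, "Go concurrency patterns in Elixir"})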
Elixir, Erlang and Go have some common goals, like the ones cited in the first paragraph of this post, but they also have their specifics. Embracing differences provides better results in the long term because it helps leverage each language power.
References
- Concurrency models and exception handling comparison
- Wikipedia entry comparing the two models
- Awesome presentation by Alexey Kachayev comparing even more languages
- Full Erlang channels implementation
- Rosetta code comparison between many synchronous concurrency styles
|
http://blog.plataformatec.com.br/2014/10/playing-with-elixir-and-go-concurrency-models/
|
CC-MAIN-2018-30
|
en
|
refinedweb
|
QVariant.canConvert inconsistencies
Hi there,
following minimized problem:
@
#include <QtCore/QCoreApplication>
#include <QVariant>
#include <QDebug>
#include <stdexcept>
template <typename Type>
Type get(QVariant& source)
{
if(source.canConvert<Type>())
return source.value<Type>();
else
throw std::runtime_error("Invalid type");
}
int main(int argc, char *argv[])
{
QVariant variant;
variant.setValue(QString("blabla"));
qDebug() << get<double>(variant); // "0"
}
@
Obviously, QVariant.canConvert returns true and the string "blabla" gets converted into a double with the value 0 (zero), as the application does not throw an exception. Storing and/or querying other types results in the same value, or better said the same "problem", because it seems that nearly any type can be converted into any other type but the return value may be zero (I write nearly because we haven't checked all types).
Why?
And is there a workaround which gives us a template-based possibility to check the stored type?
We used boost::lexical_cast before switching to QVariant, as we thought that QVariant would be the easier solution.
greets.
an ky
canConvert makes a static check of whether the type can be converted, not whether the conversion can really be done. Nearly everything can be converted to/from a string.
Hi Gerolf,
is there a way to do dynamic checks?
- koahnig Moderators
There is also "type": for checks available.
AFAIK, QVariant does not offer them.
You have to do them on your own.
But it will be tricky, because:
Is a date convertible to a long? Yes (seconds since ...), or no?
Is a double convertible to an int if it contains a non-integer value, like 2.567?
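As a sketch of how such a check could look, one option is to compare the stored type id (via userType() and qMetaTypeId) instead of calling canConvert; note this deliberately rejects values that merely could be converted:

template <typename Type>
Type get(const QVariant& source)
{
    // Succeeds only if the variant actually stores a Type,
    // not merely something canConvert() claims is convertible.
    if (source.userType() == qMetaTypeId<Type>())
        return source.value<Type>();

    throw std::runtime_error("Invalid type");
}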
|
https://forum.qt.io/topic/9458/qvariant-canconvert-inconsistencies
|
CC-MAIN-2018-39
|
en
|
refinedweb
|
Python Client Library
This document describes Python library that wraps YouTrack REST API.
Compatibility.
Installation
- Download the latest version of YouTrack Python Client library and unzip it.
- Install Python
Authenticating
from youtrack.connection import Connection

connection = Connection('', 'xxx', 'xxx')
Get Issues
# get one issue
connection.getIssue('SB-1')

# get first 10 issues in project JT for query 'for: me #unresolved'
connection.getIssues('JT', 'for: me #unresolved', 0, 10)
Create Issue
connection.createIssue('SB', 'resttest', 'Test issue', 'Test description', '2', 'Bug', 'First', 'Open', '', '', '')
Other Methods
See the methods of the Connection class in youtrack/connection.py
Last modified: 2 February 2017
|
https://www.jetbrains.com/help/youtrack/standalone/7.0/Python-Client-Library.html
|
CC-MAIN-2018-39
|
en
|
refinedweb
|
Published 11 months ago by artmonger
I'm passing two variables from the index page to the create page. The output inserts white-space into the characters of the variable if I concatenate the two variables in the controller. Other times, it only gives me the last character, or the first characters and cuts off the last one.
Here's the part on my index page:
<li> {{ $chicken->variety }} {{ $chicken->name }} <a href="{{ route('/chickens/{variety}{name}/create', ['variety' => $chicken->variety, 'name' => $chicken->name]) }}">create</a> </li>
Here's the Route:
Route::get('/chickens/{variety}{name}/create', [
    'as' => '/chickens/{variety}{name}/create',
    'uses' => '[email protected]'
]);
With just one variable, it was working great. I could pass a breed name and I'd get it on the create page, such as "Create a new Sebright", etc.
However, I want to pass in a variety slug, such as "Create a new Golden Sebright" or "Create a new "Silver Sebright", etc.
Here's my controller method:
public function create($variety, $name)
{
    $variety;
    $name;

    return view('chickens.create', compact('variety', 'name'));
}
What do I get? Very strange things, like... "Create a New l" That's the last character in Aseel.
or if I click on an Ameraucana, I end up with "Create a New a" Again, the last letter.
I tried this, $breed = $variety. ' ' .$name; and then passing $breed to compact()...
But I get things like "American Buf f". Note, the space between the f's.
dd($breed); and I get two strings... One is the first word, the second is the last character of the breed name.
I've tried using with(); but I still find problems.
What do you think?
You should probably separate both variables in the url with a slash.
Also, you don't have to repeat the route again in the
'as'. That parameter is for naming the route.
// Like this
Route::get('/chickens/{variety}/{name}/create', [
    'as' => 'chickens.create',
    'uses' => '[email protected]'
]);

// Or this if you have an up to date version of Laravel
Route::get('/chickens/{variety}/{name}/create', '[email protected]')->name('chickens.create');
// The name isn't mandatory
I tried this:
Route::get('/chickens/{variety}/{name}/create', '[email protected]');
and I changed my route method to,
but I'm getting an error Exception,
Route[/chickens/{variety}/{name}/create] not defined... How is that possible? The strings match.
The route() method requires named routing. When I use named routing, I get my page dumping all my chickens. But when I select the create link I only get the last letter of the breed name. That's just strange.
Yeah something is going wrong somewhere.
Can you show me the controller and view please.
Also, if you can show me where you're passing the parameters for the URL (e.g: a form or button).
Alternatively, what you could do is just have a generic
/chickens/create route and have a couple of select boxes which a user selects for the variety and name.
Named routing should have nothing to do with it.
Please can you use three backticks ``` before and after your code blocks so we can actually see what you are doing.
Please list your routes and your controller.
I came up with another solution:
Your original problem was with this line here:
<a href="{{ route('/chickens/{variety}{name}/create', ['variety' => $chicken->variety, 'name' => $chicken->name]) }}">create</a>.
When you use the
route() method, you give it a name instead of the URL. You were passing the whole route AND the wildcards which is a no no and a recipe for disaster (which happened).
When you use the
as key on the route or when you chain on
->name('some.name'); is when a route becomes a named route.
So for example, to use the
route() method:
Route::get('/', '[email protected]')->name('home');
Route::get('/', ['uses' => '[email protected]', 'as' => 'home']);
// both of them are the same

// Then i'd do something like
<a href="{{ route('home') }}">a link</a>

// if i need to pass one parameter, i would pass it as the second argument
<a href="{{ route('home', $user->username) }}">a link</a>

// if i need to pass more than one parameter, i would pass an array as the second argument
<a href="{{ route('home', [$user->username, $article->slug]) }}">a link</a>

// You can also do route model binding where you just pass the whole object to the second argument
<a href="{{ route('home', $user) }}">a link</a>
P.S: you don't need to declare arguments from a function/method within the function/method.
This is what you were doing:
public function create($variety, $name)
{
    $variety;
    $name;

    return view('chickens.create', compact('variety','name'));
}
You can just do:
public function create($variety, $name)
{
    return view('chickens.create', compact('variety','name'));
}
Some good feedback there. Also, to understand 'strange String Phenomena'
You were hoping to set '/chickens/{variety}{name}/create' as your route
You have two variables next to each other with no means to separate them
So, if you requested this route with the string 'Create a new Golden Sebright', the Laravel router is trying to assign some part of the string to variety and some to name, with no way to delimit them. It might do
$variety = 'Create a new Golden Sebrigh' and
$name='t'
or
$variety = 'Create a new G' and
$name='olden Sebright'
or
$variety = 'C' and
$name='reate a new Golden Sebright'
Laravel has no way of knowing what was in your head about where the split should be.
Assume for some SEO reason you want to pass a 'slug' to the controller; then just do that, but you will need to split it down again, which you can do in your route.
@foreach($chickens as $chicken)
    <li>
        @if($chicken->is_bantam == true)
            <label>bantam</label>
        @endif
        {{ $chicken->variety }} {{ $chicken->name }}
        <a href="{{ route('chickens.create', $chicken->variety."-" .$chicken->name) }}">create</a>
    </li>
@endforeach
</ul>
Route should be like;
Route::get('/chickens/{variety}-{name}/create', [
    'as' => 'chickens.create',
    'uses' => '[email protected]'
]);
Note that the delimiter is in the route, separating the parts.
Then in controller you will get both variables passed
public function create($variety, $name) {
|
https://laracasts.com/discuss/channels/general-discussion/mysterious-string-phenomena
|
CC-MAIN-2018-39
|
en
|
refinedweb
|
Dependency injection with Guice
Testable code with less boilerplate
Gu.
Guice 2.0 beta
As I write this, the Guice team is working hard on Guice 2.0 and expects to release before the end of 2008. An early beta is posted on the Google Code download site (see Related topics).
The case for DI
public class FrogMan {
    private FrogMobile vehicle = new FrogMobile();

    public FrogMan() {}

    // crime fighting logic goes here...
}

public class FrogManTest extends TestCase {
    public void testFrogManFightsCrime() {
        FrogMan hero = new FrogMan();
        hero.fightCrime();
        //make some assertions...
    }
}
All seems well until I try running the test, whereupon I get the exception in Listing 2:
Listing 2. Dependencies can be troublesome
java.lang.RuntimeException: Refinery startup failure.
    at HeavyWaterRefinery.<init>(HeavyWaterRefinery.java:6)
    at FrogMobile.<init>(FrogMobile.java:5)
    at FrogMan.<init>(FrogMan.java:8)
    at FrogManTest.testFrogManFightsCrime(FrogManTest.java:10)
Enter DI
To avoid this problem, you can create an interface (for example,
Vehicle) and
have your
FrogMan class accept the
Vehicle as a constructor
argument, as in Listing 3:
Listing 3. Depend on interfaces, and have them injected
public class FrogMan {
    private Vehicle vehicle;

    public FrogMan(Vehicle vehicle) {
        this.vehicle = vehicle;
    }

    // crime fighting logic goes here...
}
This idiom is the essence of DI — have your classes accept their dependencies through references to interfaces instead of constructing them (or using static references). Listing 4 shows how DI makes your test easier:
Listing 4. Your test can use mocks instead of troublesome dependencies
static class MockVehicle implements Vehicle {
    boolean didZoom;

    public String zoom() {
        this.didZoom = true;
        return "Mock Vehicle Zoomed.";
    }
}

public void testFrogManFightsCrime() {
    MockVehicle mockVehicle = new MockVehicle();
    FrogMan hero = new FrogMan(mockVehicle);
    hero.fightCrime();
    assertTrue(mockVehicle.didZoom);
    // other assertions
}

Here, FrogMan's constructor has been @Injected:
@Inject
public FrogMan(Vehicle vehicle) {
    this.vehicle = vehicle;
}

The HeroModule binds Vehicle to FrogMobile:
public class HeroModule implements Module {
    public void configure(Binder binder) {
        binder.bind(Vehicle.class).to(FrogMobile.class);
    }
}
A module is an interface with a single method. The
Binder that Guice passes to
your module lets you tell Guice how you want your objects constructed. The binder API forms
a domain-specific language (see Related topics).
public class Adventure {
    public static void main(String[] args) {
        Injector injector = Guice.createInjector(new HeroModule());
        FrogMan hero = injector.getInstance(FrogMan.class);
        hero.fightCrime();
    }
}

(See Related topics.) This will seem disconcerting at first, but after a while you'll get used to it. As an example, Listing 8 shows the FrogMobile having its FuelSource injected:
Listing 8. The FrogMobile accepting a FuelSource

@Inject
public FrogMobile(FuelSource fuelSource) {
    this.fuelSource = fuelSource;
}

(See Related topics.)
Other forms of injection
public class FrogMan {
    private Vehicle vehicle;

    @Inject
    public void setVehicle(Vehicle vehicle) {
        this.vehicle = vehicle;
    }

    //etc. ...
public class FrogMan {
    @Inject private Vehicle vehicle;

    public FrogMan() {}

    //etc. ...
Again, all Guice cares about is the
@Inject annotation. It finds any fields
you annotate and tries to inject the appropriate dependency.
Which one is best?
@Inject
public WeaselGirl(@Fast Vehicle vehicle) {
    this.vehicle = vehicle;
}
In Listing 12, the
HeroModule uses the binder to tell Guice that the
WeaselCopter is "fast":
Listing 12. Tell Guice about your annotation in your
Module
public class HeroModule implements Module {
    public void configure(Binder binder) {
        binder.bind(Vehicle.class).to(FrogMobile.class);
        binder.bind(Vehicle.class).annotatedWith(Fast.class).to(WeaselCopter.class);
    }
}
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.FIELD, ElementType.PARAMETER})
@BindingAnnotation
public @interface Fast {}

You can also use Guice's built-in @Named annotation instead of a custom annotation:
// in WeaselGirl
@Inject
public WeaselGirl(@Named("Fast") Vehicle vehicle) {
    //...
}

// in HeroModule
binder.bind(Vehicle.class)
    .annotatedWith(Names.named("Fast")).to(WeaselCopter.class);
Provider methods
@Provides
private Hero provideHero(FrogMan frogMan, WeaselGirl weaselGirl) {
    if (Math.random() > .5) {
        return frogMan;
    }
    return weaselGirl;
}
public class Saga {
    private final Provider<Hero> heroProvider;

    @Inject
    public Saga(Provider<Hero> heroProvider) {
        this.heroProvider = heroProvider;
    }

    public void start() throws IOException {
        for (int i = 0; i < 3; i++) {
            Hero hero = heroProvider.get();
            hero.fightCrime();
        }
    }
}

FrogMan could also ask for a Provider instead of the dependency itself:
@Inject
public FrogMan(Provider<Vehicle> vehicleProvider) {
    this.vehicle = vehicleProvider.get();
}
(Note that you didn't have to change the module code at all.) This rewrite doesn't serve
any purpose; it just illustrates that you can always ask for a
Provider instead
of the dependency directly.
Scopes
Here the refinery is bound as a singleton:

public class HeroModule implements Module {
    public void configure(Binder binder) {
        //...
        binder.bind(FuelSource.class)
            .to(HeavyWaterRefinery.class).in(Scopes.SINGLETON);
    }
}
@Singleton
public class HeavyWaterRefinery implements FuelSource {...}
public class HeavyWaterRefinery implements FuelSource {
    @Inject
    public HeavyWaterRefinery(@Named("LicenseKey") String key) {...}
}

// in HeroModule:
binder.bind(String.class)
    .annotatedWith(Names.named("LicenseKey")).toInstance("QWERTY");
//In HeroModule:
private void loadProperties(Binder binder) {
    InputStream stream = HeroModule.class.getResourceAsStream("/app.properties");
    Properties appProperties = new Properties();
    try {
        appProperties.load(stream);
        Names.bindProperties(binder, appProperties);
    } catch (IOException e) {
        // This is the preferred way to tell Guice something went wrong
        binder.addError(e);
    }
}

//In the file app.properties:
LicenseKey=QWERTY1234
What comes next?
Downloadable resources
- PDF of this content
- Java files for this article (j-guice.zip | 19KB)
Related topics
- Guice: The Guice home page is a great starting point.
- Dependency injection and domain specific languages: Martin Fowler is always a terrific resource. Check out his site's coverage of these topics.
- Read about the history of, and interrelationships among, DI frameworks such as Spring and HiveMind.
- Guice Best Practices: Be sure to check out this page in the Guice Wiki.
- developerWorks Java technology zone: Find hundreds of articles about every aspect of Java programming, including a number of other articles on Dependency Injection
- Guice: Download the Guice 2 beta release.
|
https://www.ibm.com/developerworks/java/library/j-guice/
|
CC-MAIN-2018-39
|
en
|
refinedweb
|
Introduction: We have already done insert, update and delete operations with SQL Server. Now we will learn how to insert data into a Microsoft Access database. Both databases are Microsoft products, so we should know about their connectivity; they integrate well with the .NET Framework and are reliable and secure. Nowadays MySQL and Oracle databases are also widely used around the world, and they are likewise reliable and secure, like SQL Server. Here I will insert some values into an MS Access database and bind its data to a GridView control. You can easily perform update and delete operations as well, but here I will perform only the insert operation.
There are lots of databases used for data storage; Microsoft Access 2010 is one of them. Microsoft provides two databases for storage purposes:
- Microsoft SQL Server Management Studio
- Microsoft Access
There are some steps to implement these concepts in an ASP.NET application, as given below:
Step 1: First open Visual Studio --> File --> New --> Website --> select ASP.NET Empty Website --> OK --> add a new Web Form (Default.aspx) --> drag and drop Label, TextBox, Button and GridView controls onto the page from the ToolBox as shown below:
Step 2: Now create a table (student) in the Microsoft Access database with the following fields as shown below:
Note:
If you are facing problems creating a table in the MS Access database, then see the link below:
Step 3: Now write the C# code for each button's (Submit and Display) click handler (Default.aspx.cs) as given below:
using System;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Data.OleDb;
using System.Data;

public partial class _Default : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
    }

    protected void Button1_Click(object sender, EventArgs e)
    {
        // The original connection string is not preserved in this listing;
        // adjust the provider and database path to match your own .accdb file.
        OleDbConnection my_con = new OleDbConnection(
            @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\path\to\Database1.accdb");
        my_con.Open();
        OleDbCommand o_cmd = new OleDbCommand("insert into student values(@a,@b,@c,@d)", my_con);
        o_cmd.Parameters.AddWithValue("a", TextBox1.Text);
        o_cmd.Parameters.AddWithValue("b", TextBox2.Text);
        o_cmd.Parameters.AddWithValue("c", TextBox3.Text);
        o_cmd.Parameters.AddWithValue("d", TextBox4.Text);
        int i = o_cmd.ExecuteNonQuery();
        if (i > 0)
        {
            Label1.Text = "Data inserted successfully";
        }
        my_con.Close();
    }

    protected void Button2_Click(object sender, EventArgs e)
    {
        // Same connection-string caveat as above.
        OleDbConnection my_con = new OleDbConnection(
            @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\path\to\Database1.accdb");
        my_con.Open();
        OleDbCommand o_cmd = new OleDbCommand("select * from student", my_con);
        OleDbDataAdapter da = new OleDbDataAdapter(o_cmd);
        DataTable dt = new DataTable();
        da.Fill(dt);
        GridView1.DataSource = dt;
        GridView1.DataBind();
    }
}
Note :-
- Here you have to provide the full path of your database file. By default, this database is stored in your computer's My Documents folder.
- You can move the database to any drive and update the path accordingly.
- You can perform delete and update operations with the MS Access database as well.
Step 5 :- Now open your student table in the MS Access database --> you will see that the data has been inserted into the table (student) as shown below:-
For More...
- How to Run C# Program on NotePad easily
- How to use WCF Services in asp.net
- Form based authentication in asp.net
- How to create HTTP Handler in asp.net
- How to implement validation controls in asp.net
- How to implement cookie concepts in asp.net website
Download Whole Attached File
Website Access Database
|
http://www.msdotnet.co.in/2015/01/how-to-insert-data-in-microsoft-access.html
|
CC-MAIN-2019-26
|
en
|
refinedweb
|
Murl::Graph::MultiMaterial Class Reference
The MultiMaterial node class. More...
#include "murl_graph_multi_material.h"
Inheritance diagram for Murl::Graph::MultiMaterial:
Detailed Description
The MultiMaterial node class.
XML Elements
- XML Graph Node Attributes:
A material node ID to include at the given index N. See Murl::Graph::IMaterial::GetSubMaterialNodeTarget().
A comma-separated string of individual material node IDs to group together. See Murl::Graph::IMaterial::GetSubMaterialNodeTarget().
The slot index to which the material gets temporarily assigned during traversal of its children, in the range from 0 to Murl::IEnums::NUM_MATERIAL_SLOTS. See Murl::Graph::IStateSlot::SetSlot().
The documentation for this class was generated from the following file:
- murl_graph_multi_material.h
|
https://murlengine.com/api/en/class_murl_1_1_graph_1_1_multi_material.php
|
CC-MAIN-2019-26
|
en
|
refinedweb
|
TruWeb script structure
This topic describes the TruWeb script structure.
You write TruWeb scripts using the TruWeb JavaScript SDK. You can write standard JavaScript code as part of your script, or reuse any other pure (for example, third party) JavaScript file.
You can use a variety of IDEs or text editors to build and test your TruWeb script. For more information, see Build TruWeb scripts.
User authentication
Note: Digest and NTLM authentication are supported from version 2018.11.4.
TruWeb supports the following types of user authentication:
Basic
NTLM
Digest.
TruWeb script files
The TruWeb script folder includes the following files:
Tip: A template script folder, EmptyScript, is available in <TruWeb_installation folder>\examples.
main.js. Contains the code of the script.
scenario.yml. Responsible for the TruWeb scenario settings when running a TruWeb script in load mode. For details, see Run TruWeb scripts.
Note: This file is used for the standalone version of TruWeb only; it is not used when running a test in LoadRunner or Performance Center.
rts.yml (optional). TruWeb is installed with default runtime settings for TruWeb test runs. The runtime settings are specific to each Vuser, such as logger or proxy settings. You can see descriptions of the runtime settings alongside each setting. This file is called rts.yml, and resides in the TruWeb installation folder.
If you want to customize these settings, you can do so in a local runtime settings file that you create in the script's folder. When the script runs, the customized runtime settings take precedence over the default runtime settings. To customize:
- Copy rts.template.yml from your TruWeb installation folder to your script's folder, and rename it rts.yml.
- Customize the copied file and save your changes.
parameters.yml (optional). Contains parameters for your script. This file is optional and is user-defined in the script's folder.
The parameters file defines the parameters that can be used in the script and various aspects of how and when new values are retrieved.
Example of a parameters.yml file:
parameters:                 # The parameters header; it must be the first line of the file
  - name: myParam1          # The name of the parameter that can later be used in the code
    type: csv               # The type of the data source of the parameter. The valid value is: csv
    fileName: myParam1.csv  # The name of the file that has all the values for this parameter
    columnName: Column1     # The column name used to draw values for this parameter
    nextValue: iteration    # The logic used to know when to retrieve new values
    nextRow: sequential     # The logic used to know how to retrieve new values
    onEnd: loop             # What happens when there are no more values remaining and the script needs another value
  - name: myParam2          # Another parameter...
    type: csv
    fileName: myParam1.csv
    columnName: Column2
    nextValue: iteration
    nextRow: sequential
    onEnd: loop
  - name: myParam3          # Another parameter...
    type: csv
    fileName: myParam1.csv
    columnName: Column3
    nextValue: iteration
    nextRow: same as myParam2
    onEnd: loop
  - name: myParam4          # A unique parameter...
    type: csv
    fileName: myParam1.csv
    columnName: Column4
    nextValue: always
    nextRow: unique
    blockSize: 5
    onEnd: loop
transactions.yml (optional). For StormRunner Load users only.
To enable the SLA feature of StormRunner Load, populate this file with a list of transaction names for which to calculate the SLA.
Note: The script must contain transactions with matching names
Format the list as shown in the following example, where two transactions are defined, named foo and bar:
- name: foo
- name: bar
user_args.json (optional)
Note: This file is supported from version 2019.2.4.
The User Arguments file contains arguments equivalent to the custom command line options used for LoadRunner scripts. The arguments are used in the TruWeb script for TruWeb execution.
For API properties, see config in the SDK documentation.
The file is in a key-value JSON format:
{ "key": "value", "hello": "world" }
main.js layout
This section describes the layout of the main.js file.
main.js can contain three types of sections: initialize, action, and finalize. These can appear more than once and are executed in the order that they appear in the script.
The following is an example of a typical TruWeb main.js script.
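A minimal sketch of what such a main.js could look like is shown below. The specific calls (load.initialize, load.action, load.finalize, load.WebRequest, load.log) are assumptions based on the load namespace convention described in the note that follows, so check the SDK documentation for the exact API.

// Minimal sketch of a main.js; the SDK call names used here are assumptions.
load.initialize(async function () {
    // One-time setup, e.g. logging in or preparing test data.
});

load.action("Action", async function () {
    // Hypothetical request; the URL is a placeholder.
    const response = new load.WebRequest({
        url: "http://myserver/api/items",
        method: "GET",
        returnBody: true
    }).sendSync();

    load.log("Status code: " + response.status);
});

load.finalize(async function () {
    // Cleanup after the run.
});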
Note:
- All the elements of the SDK are within the load namespace (object), therefore the load. prefix is needed to access them.
- Tru.
Third party modules and libraries
You can load third-party CommonJS-style modules via the require() function.
Load an external module in one of the following ways:
See also:
|
https://admhelp.microfocus.com/truweb/en/latest/help/Content/TruWeb/TW-scripts.htm
|
CC-MAIN-2019-26
|
en
|
refinedweb
|
Unit Testing ASP.NET Web API 2
Download Completed Project
This guidance and application demonstrate how to create simple unit tests for your Web API 2 application. This tutorial shows how to include a unit test project in your solution, and write test methods that check the returned values from a controller method.
This tutorial assumes you are familiar with the basic concepts of ASP.NET Web API. For an introductory tutorial, see Getting Started with ASP.NET Web API 2.
The unit tests in this topic are intentionally limited to simple data scenarios. For unit testing more advanced data scenarios, see Mocking Entity Framework when Unit Testing ASP.NET Web API 2.
Software versions used in the tutorial
- Visual Studio 2017
- Web API 2
In this topic
This topic contains the following sections:
- Prerequisites
- Download code
- Create application with unit test project
- Set up the Web API 2 application
- Install NuGet packages in test project
- Create tests
- Run tests
Prerequisites
Visual Studio 2017 Community, Professional or Enterprise edition
Download code
Download the completed project. The downloadable project includes unit test code for this topic and for the Mocking Entity Framework when Unit Testing ASP.NET Web API topic.
Create application with unit test project
You can either create a unit test project when creating your application or add a unit test project to an existing application. This tutorial shows both methods for creating a unit test project. To follow this tutorial, you can use either approach.
Add unit test project when creating the application
Create a new ASP.NET Web Application named StoreApp.
In the New ASP.NET Project windows, select the Empty template and add folders and core references for Web API. Select the Add unit tests option. The unit test project is automatically named StoreApp.Tests. You can keep this name.
After creating the application, you will see it contains two projects.
Add unit test project to an existing application
If you did not create the unit test project when you created your application, you can add one at any time. For example, suppose you already have an application named StoreApp, and you want to add unit tests. To add a unit test project, right-click your solution and select Add and New Project.
Select Test in the left pane, and select Unit Test Project for the project type. Name the project StoreApp.Tests.
You will see the unit test project in your solution.
In the unit test project, add a project reference to the original project.
Set up the Web API 2 application
In your StoreApp project, add a class file to the Models folder named Product.cs. Replace the contents of the file with the following code.
using System; namespace StoreApp.Models { public class Product { public int Id { get; set; } public string Name { get; set; } public decimal Price { get; set; } } }
Build the solution.
Right-click the Controllers folder and select Add and New Scaffolded Item. Select Web API 2 Controller - Empty.
Set the controller name to SimpleProductController, and click Add.
Replace the existing code with the following code. To simplify this example, the data is stored in a list rather than a database. The list defined in this class represents the production data. Notice that the controller includes a constructor that takes as a parameter a list of Product objects. This constructor enables you to pass test data when unit testing. The controller also includes two async methods to illustrate unit testing asynchronous methods. These async methods were implemented by calling Task.FromResult to minimize extraneous code, but normally the methods would include resource-intensive operations.
using System; using System.Collections.Generic; using System.Linq; using System.Threading.Tasks; using System.Web.Http; using StoreApp.Models; namespace StoreApp.Controllers { public class SimpleProductController : ApiController { List<Product> products = new List<Product>(); public SimpleProductController() { } public SimpleProductController(List<Product> products) { this.products = products; } public IEnumerable<Product> GetAllProducts() { return products; } public async Task<IEnumerable<Product>> GetAllProductsAsync() { return await Task.FromResult(GetAllProducts()); } public IHttpActionResult GetProduct(int id) { var product = products.FirstOrDefault((p) => p.Id == id); if (product == null) { return NotFound(); } return Ok(product); } public async Task<IHttpActionResult> GetProductAsync(int id) { return await Task.FromResult(GetProduct(id)); } } }
The GetProduct method returns an instance of the IHttpActionResult interface. IHttpActionResult is one of the new features in Web API 2, and it simplifies unit test development. Classes that implement the IHttpActionResult interface are found in the System.Web.Http.Results namespace. These classes represent possible responses from an action request, and they correspond to HTTP status codes.
Build the solution.
You are now ready to set up the test project.
Install NuGet packages in test project
When you use the Empty template to create an application, the unit test project (StoreApp.Tests) does not include any installed NuGet packages. Other templates, such as the Web API template, include some NuGet packages in the unit test project. For this tutorial, you must include the Microsoft ASP.NET Web API 2 Core package to the test project.
Right-click the StoreApp.Tests project and select Manage NuGet Packages. You must select the StoreApp.Tests project to add the packages to that project.
Find and install Microsoft ASP.NET Web API 2 Core package.
Close the Manage NuGet Packages window.
Create tests
By default, your test project includes an empty test file named UnitTest1.cs. This file shows the attributes you use to create test methods. For your unit tests, you can either use this file or create your own file.
For this tutorial, you will create your own test class. You can delete the UnitTest1.cs file. Add a class named TestSimpleProductController.cs, and replace the code with the following code.
using System; using System.Collections.Generic; using System.Threading.Tasks; using System.Web.Http.Results; using Microsoft.VisualStudio.TestTools.UnitTesting; using StoreApp.Controllers; using StoreApp.Models; namespace StoreApp.Tests { [TestClass] public class TestSimpleProductController { [TestMethod] public void GetAllProducts_ShouldReturnAllProducts() { var testProducts = GetTestProducts(); var controller = new SimpleProductController(testProducts); var result = controller.GetAllProducts() as List<Product>; Assert.AreEqual(testProducts.Count, result.Count); } [TestMethod] public async Task GetAllProductsAsync_ShouldReturnAllProducts() { var testProducts = GetTestProducts(); var controller = new SimpleProductController(testProducts); var result = await controller.GetAllProductsAsync() as List<Product>; Assert.AreEqual(testProducts.Count, result.Count); } [TestMethod] public void GetProduct_ShouldReturnCorrectProduct() { var testProducts = GetTestProducts(); var controller = new SimpleProductController(testProducts); var result = controller.GetProduct(4) as OkNegotiatedContentResult<Product>; Assert.IsNotNull(result); Assert.AreEqual(testProducts[3].Name, result.Content.Name); } [TestMethod] public async Task GetProductAsync_ShouldReturnCorrectProduct() { var testProducts = GetTestProducts(); var controller = new SimpleProductController(testProducts); var result = await controller.GetProductAsync(4) as OkNegotiatedContentResult<Product>; Assert.IsNotNull(result); Assert.AreEqual(testProducts[3].Name, result.Content.Name); } [TestMethod] public void GetProduct_ShouldNotFindProduct() { var controller = new SimpleProductController(GetTestProducts()); var result = controller.GetProduct(999); Assert.IsInstanceOfType(result, typeof(NotFoundResult)); } private List<Product> GetTestProducts() { var testProducts = new List<Product>(); testProducts.Add(new Product { Id = 1, Name = "Demo1", Price = 1 }); testProducts.Add(new Product { Id = 2, Name = "Demo2", Price = 3.75M }); testProducts.Add(new Product { Id = 3, Name = "Demo3", Price = 16.99M }); testProducts.Add(new Product { Id = 4, Name = "Demo4", Price = 11.00M }); return testProducts; } } }
Run tests
You are now ready to run the tests. All of the methods that are marked with the TestMethod attribute will be tested. From the Test menu item, run the tests.
Open the Test Explorer window, and notice the results of the tests.
Summary
You have completed this tutorial. The data in this tutorial was intentionally simplified to focus on unit testing conditions. For unit testing more advanced data scenarios, see Mocking Entity Framework when Unit Testing ASP.NET Web API 2.
|
https://docs.microsoft.com/en-us/aspnet/web-api/overview/testing-and-debugging/unit-testing-with-aspnet-web-api
|
CC-MAIN-2019-26
|
en
|
refinedweb
|
I'm trying to extract text between tags of a HTML page using a keyword. Here is an example.
<div class="xyz">Title</div>
<h4>Education</h4>
<p>PhD, 2017, Subject,<br />
ABC University </p>
r = requests.get(site)
soup = BeautifulSoup(r.content, "lxml")
for elems in soup(text=re.compile('PhD')):
val = elems.find_parent('p').getText()
You can try to use
lxml.html to get desired text:
import lxml.html as html source = requests.get(site).content html_obj = html.fromstring(source) my_text = " ".join([text.strip() for text in html_obj.xpath('//h4[.="Education"]/following-sibling::p/text()')]) print(my_text)
Output
'PhD, 2017, Subject, ABC University'
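If you would rather stay with BeautifulSoup, the approach from the question also works once the imports are added and the result is printed. A minimal sketch, with site as a placeholder URL for a page containing the markup shown above:

import re

import requests
from bs4 import BeautifulSoup

site = "http://example.com/profile"  # placeholder URL

r = requests.get(site)
soup = BeautifulSoup(r.content, "lxml")

for elem in soup(text=re.compile('PhD')):
    # Climb up to the enclosing <p> and print its full text
    print(elem.find_parent('p').get_text(" ", strip=True))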
|
https://codedump.io/share/awmkx1a90VJ3/1/extracting-text-between-tags-using-a-particular-word
|
CC-MAIN-2019-26
|
en
|
refinedweb
|
Splitting Datasets into Training/Testing/Validating¶
This example shows how to split a single dataset into two datasets, one used for training and the other used for testing.
Note that when splitting frames, H2O does not give an exact split. It’s designed to be efficient on big data using a probabilistic splitting method rather than an exact split. For example, when specifying a 0.75/0.25 split, H2O will produce a test/train split with an expected value of 0.75/0.25 rather than exactly 0.75/0.25. On small datasets, the sizes of the resulting splits will deviate from the expected value more than on big data, where they will be very close to exact.
library(h2o) h2o.init() # Import the prostate dataset prostate.hex <- h2o.importFile(path = "", destination_frame = "prostate.hex") print(dim(prostate.hex)) [1] 380 9 # Split dataset giving the training dataset 75% of the data prostate.split <- h2o.splitFrame(data=prostate.hex, ratios=0.75) print(dim(prostate.split[[1]])) [1] 291 9 print(dim(prostate.split[[2]])) [1] 89 9 # Create a training set from the 1st dataset in the split prostate.train <- prostate.split[[1]] # Create a testing set from the 2nd dataset in the split prostate.test <- prostate.split[[2]] # Generate a GLM model using the training dataset. x represesnts the predictor column, and y represents the target index. prostate.glm <- h2o.glm(y = "CAPSULE", x = c("AGE", "RACE", "PSA", "DCAPS"), training_frame=prostate.train, family="binomial", nfolds=10, alpha=0.5) # Predict using the GLM model and the testing dataset pred = h2o.predict(object=prostate.glm, newdata=prostate.test) # View a summary of the prediction with a probability of TRUE summary(pred$p1, exact_quantiles=TRUE) p1 Min. :0.1560 1st Qu.:0.2954 Median :0.3535 Mean :0.4111 3rd Qu.:0.4369 Max. :0.9989
import h2o from h2o.estimators.glm import H2OGeneralizedLinearEstimator h2o.init() # Import the prostate dataset prostate = "" prostate_df = h2o.import_file(path=prostate) # Split the data into Train/Test/Validation with Train having 70% and test and validation 15% each train,test,valid = prostate_df.split_frame(ratios=[.7, .15]) # Generate a GLM model using the training dataset glm_classifier = H2OGeneralizedLinearEstimator(family="binomial", nfolds=10, alpha=0.5) glm_classifier.train(y="CAPSULE", x=["AGE", "RACE", "PSA", "DCAPS"], training_frame=train) # Predict using the GLM model and the testing dataset predict = glm_classifier.predict(test) # View a summary of the prediction predict.head() predict p0 p1 --------- -------- -------- 1 0.366189 0.633811 1 0.351269 0.648731 1 0.69012 0.30988 0 0.762335 0.237665 1 0.680127 0.319873 1 0.687736 0.312264 1 0.676753 0.323247 1 0.685876 0.314124 1 0.707027 0.292973 0 0.74706 0.25294 [10 rows x 3 columns]
|
http://docs.h2o.ai/h2o/latest-stable/h2o-docs/data-munging/splitting-datasets.html
|
CC-MAIN-2019-26
|
en
|
refinedweb
|
Here is my code. I'm not exactly sure if I need a counter for this to work. The answer should be
'iiii'.
def eliminate_consonants(x): vowels= ['a','e','i','o','u'] vowels_found = 0 for char in x: if char == vowels: print(char) eliminate_consonants('mississippi')
The line
if char == vowels: is wrong. It has to be
if char in vowels:. This is because you need to check if that particular character is present in the list of vowels. Apart from that you need to
print(char,end = '') (in python3) to print the output as
iiii all in one line.
The final program will be like
def eliminate_consonants(x): vowels= ['a','e','i','o','u'] for char in x: if char in vowels: print(char,end = "") eliminate_consonants('mississippi')
And the output will be
iiii
Note A faster way
def eliminate_consonants(x): for char in x: if char in 'aeiou': print(char,end = "")
As simple as it looks, the statement
if char in 'aeiou' checks if
char is present in the string
aeiou.
The fastest method as mentioned below in comments would be
''.join(c for c in x if c in 'aeiou')
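Wrapped in a function, that one-liner gives the same result as the loop version:

def eliminate_consonants(x):
    # Keep only the vowels and join them into a single string
    return ''.join(c for c in x if c in 'aeiou')

print(eliminate_consonants('mississippi'))  # iiii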
|
http://databasefaq.com/index.php/answer/126196/python-python-3x-python-idle-vowel-print-vowels-in-string-python
|
CC-MAIN-2019-26
|
en
|
refinedweb
|
Keeping code compatible with Pythons 2 and 3¶
DIPY supports Python versions from 2.6 to 3.5. In order to maintain code that supports both Python 2 and Python 3 versions, please follow these instructions.
There is useful advice here:
Future imports¶
For any modules with print statements, and for any modules where you remember, please put:
from __future__ import division, print_function, absolute_import
As the first code line of the file, to use Python 3 behavior by default.
Print¶
In Python 3, print is a function rather than a statement. Please use the __future__ import above, and the function form print(something), whenever you print.
Division¶
In Python 2, integer division returns integers, while in Python 3 3/2 returns 1.5, not 1. It's very good to remember to put the __future__ import above at the top of the file to make this the default everywhere.
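For instance, a small sketch of the difference:

from __future__ import division

print(3 / 2)   # 1.5 on both Python 2 and Python 3 thanks to the import
print(3 // 2)  # 1 -- use // when floor division is what you actually want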
Moved modules¶
There are compatibility routines in
dipy.utils.six. You can often get
modules that have moved between the versions with (e.g.):
from dipy.utils.six.moves import configparser
See the
six.py code and the six.py docs.
Range, xrange¶
range returns an iterator in Python3, and
xrange is therefore redundant,
and it has been removed. Get
xrange for Python 2,
range for Python 3
with:
from dipy.utils.six.moves import xrange
Or you might want to stick to
range for Python 2 and Python 3, especially
for small lists where the memory benefit for
xrange is small.
Because
range returns an iterator for Python 3, you may need to wrap some
calls to range with
list(range(N)) to make the code compatible with Python 2
and Python 3.
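A short sketch of the wrapping mentioned above:

# range() returns an iterator on Python 3, so wrap it when a real list is needed
indices = list(range(5))
print(indices)        # [0, 1, 2, 3, 4] on both Python 2 and Python 3
print(indices[::-1])  # slicing works because indices is a list, not a bare range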
Reduce¶
Python 3 removed
reduce from the builtin namespace, this import works for
both Python 2 and Python 3:
from functools import reduce
Strings¶
The major difference between Python 2 and Python 3 is the string handling.
Strings (
str) are always unicode, and so:
my_str = 'A string'
in Python 3 will result in a unicode string. You also need to be much more
explicit when opening files; If you want bytes, use:
open(fname, "rb"). If
you want unicode:
open(fname, "rt"). In the same way you need to be explicit if
you want
import io; io.StringIO or
io.BytesIO for your file-like objects
containing strings or bytes.
basestring has been removed in Python 3. To test whether something is a
string, use:
from dipy.utils.six import string_types isinstance(a_variable, string_types)
Next function¶
In versions of Python from 2.6 and on there is a function
next in the
builtin namespace, that returns the next result from an iterable thing. In
Python 3, meanwhile, the
.next() method on generators has gone, replaced by
.__next__(). So, prefer
next(obj) to
obj.next() for generators, and
in general when getting the next thing from an iterable.
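For example:

squares = (i * i for i in range(3))
print(next(squares))  # 0 -- works on Python 2.6+ and Python 3
print(next(squares))  # 1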
Except¶
You can’t get away with
except ValueError, err now, because that raises a
syntax error for Python 3. Use
except ValueError as err instead.
|
http://nipy.org/dipy/devel/python3.html
|
CC-MAIN-2019-26
|
en
|
refinedweb
|
Is Go an Object Oriented language?
The first technical article is dedicated to a slightly opinionated topic but an important one. Soon after you write your first Go program, you will start thinking about how to organise your code. Should I write a Function? Should I create a new Struct?
Eventually it all comes to the same question.
Is Go an object oriented language?.
The answer is, like everything in life: "it depends".
Today’s article is presented by GophersLand citizen: Lukas [twitter].
About Author
I have been doing PHP/Java for 7 years now. I developed bidding applications handling 100s of millions of bids from advertisers such as Booking.com or Expedia. I created a poker social network, TiltBook, and even published a Udemy course on "Object Oriented Programming"!
About OOP
Before we are able to decide if Go is OO or not, we must first define an OO language.
When I started writing this article, I defined OOP based on the Wikipedia definition:
- is a programming paradigm based on the concept of “objects”, which may contain data, in the form of fields, often known as attributes; and code, in the form of procedures, often known as methods
- an Object’s procedures can access and often modify the attributes of the object with which they are associated
- an Object’s internal state is protected from outside world (encapsulated) leveraging private/protected/public visibility of attributes and methods
- an Object is frequently defined in OO languages as an instance of a Class
The above concept properties are implemented in most popular OO languages, Java and C++ by mechanics such as:
- Encapsulation (possible on package level in Go)
- Composition (possible through embedding in Go)
- Polymorphism (possible through Interface satisfaction in Go. Type satisfies Interface without manually implementing it if it defines all the Interface methods. Since almost anything can have methods attached, even primitive types such as Int, almost anything can satisfy an interface; see the sketch after this list)
- Inheritance (Go does not provide the typical, type-driven notion of subclassing because it’s fragile and considered a bad practice, inferior to Composition)
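To illustrate the interface-satisfaction point from the list above, here is a minimal sketch; the Speaker interface and the two types are made up for the example and are not from the article.

// Implicit interface satisfaction: neither type declares that it implements Speaker.
package main

import "fmt"

type Speaker interface {
    Speak() string
}

type Gopher struct{}

func (Gopher) Speak() string { return "go go go" }

// Even a type based on a primitive can have methods and satisfy an interface.
type MyInt int

func (m MyInt) Speak() string { return fmt.Sprintf("I am %d", m) }

func main() {
    speakers := []Speaker{Gopher{}, MyInt(42)} // both satisfy Speaker without saying so
    for _, s := range speakers {
        fmt.Println(s.Speak())
    }
}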
OOP Original Conception
After I published the article, I realised the concept of Objects from a historical perspective is extremely complex and subjective.
What surprised me the most is the fact that the creator of the term "object oriented", Dr Alan Kay, didn't base the methodology on the previously mentioned mechanics (Encapsulation, Composition, Polymorphism and Inheritance); those evolved further as side effects.
Alan Kay’s original conception was based on the following properties (thx Michael Kohl for providing this resource):
- Messaging (possible via Channels in Go)
In terms of communication between Objects, how modules, objects communicate should be designed rather than what their internal properties and behaviors should be
- Local retention, protection, and hiding of state-process (possible by defining public/private attributes and methods in Go)
- Extreme late-binding of all things (possible via higher-order-functions and Interfaces in Go)
A higher order function is a function that takes a function as an argument, or returns a function.
Additional deep sources on OOP:
Local retention, protection, and hiding of state-process
In other words, Encapsulation
is an object-oriented programming concept that binds together the data and functions that manipulate the data, and that keeps both safe from outside interference and misuse.
Yet, everybody uses setters (a guaranteed way to ruin your state) and gets OOP wrong right at the beginning but that’s a topic for another discussion.
Encapsulation is all about maintaining your objects in a valid state and data hiding. Majority of OOP codebases are monoliths. Well, at least until 2017 before the whole Microservices boom…
Monoliths apps have usually one big state, shared memory and state access is controlled using famous private/protected/public attributes/methods.
You know what? That works very well for a wide range of applications if you follow DDD and achieve a proper encapsulation!
OOP strengths: Individualism, Encapsulation, Shared memory, Mutable State
How is encapsulation and shared memory achieved in object oriented languages?
public class FirstGopherlandCodeSnippet {
private int excitementLvl = 99;
}
What makes GoLang special?
All this is very hard to achieve in current Object Oriented languages such as Java. Go took a different approach.
Do not communicate by sharing memory; instead, share memory by communicating
The number 1 feature of GoLang is the exact opposite definition of what OOP stands for. This definition alone should be a sufficient first hint that a different implementation and way of thinking must be adopted.
This efficient communication is achieved especially using the following (a channel sketch follows this list):
- Channels
- Pure Functions
- Action/Reaction; Request/Response function call design
- Chain/Impulses/Stream of events
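A minimal sketch of that idea, with a made-up worker that owns its data and communicates results over a channel instead of sharing and locking state:

package main

import "fmt"

// The worker owns the jobs it receives and reports results over a channel;
// no other goroutine touches its memory directly.
func worker(jobs <-chan int, results chan<- int) {
    for j := range jobs {
        results <- j * j // send the result instead of writing to shared memory
    }
}

func main() {
    jobs := make(chan int)
    results := make(chan int)

    go worker(jobs, results)

    go func() {
        for i := 1; i <= 3; i++ {
            jobs <- i
        }
        close(jobs)
    }()

    for i := 0; i < 3; i++ {
        fmt.Println(<-results)
    }
}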
What about encapsulation? How is encapsulation achieved in GoLang?
Encapsulation
in Go is done on a package level by exporting Structs and Functions by capitalizing their first character.
package tar
// Private, available only from within the tar package
type header struct {
}
// Public, available from other packages
func Export() {
}
// Private, available only from within the tar package
func doExport(h header) {
}
I like to think about it in this way:
A GoLang package is the equivalent of a typical *Manager/*Handler Class in Java.
Therefore, when designing your package-specific encapsulation, start by designing a public API, defining the minimum set of FUNCTIONS that shall be exposed to the outside world.
A usage example of the standard library RSA pkg with a clear input/output API:
package rsa
// EncryptPKCS1v15 encrypts the given message with RSA and the padding scheme from PKCS#1 v1.5.
func EncryptPKCS1v15(rand io.Reader, pub *PublicKey, msg []byte) ([]byte, error) {
return []byte("secret text"), nil
}
cipherText, err := rsa.EncryptPKCS1v15(rand, pub, keyBlock)
There is no need to jump into creating a Struct straight away just because we are used to creating Classes such as RsaEncryptionManager/RsaHandler for everything all the time in previous, differently designed languages.
KISS.
Keep it simple stupid.
Keep it a function.
Struct vs Object
The responsibility of an Object is to:
- hide data from outsite world
- define behaviour
- perform messaging
- maintain a valid state
The responsibility of a Struct is to:
- maintain a valid state if implemented via pointers
- group together multiple fields, even of a different type
Go’s structs are typed collections of fields. They’re useful for grouping data together to form records.
Attaching methods directly should be reserved for specific, small Structs, before things get messy. Just because there isn't yet a term "GodStruct" (well, now there is), it doesn't mean a Struct is excused to look like this monstrosity:
I think the biggest "illusion" why Go looks like an Object Oriented language is due to the fact that a Struct and an Object look so freaking similar, therefore it is natural for developers coming from OO languages to write Go in such a manner. Except. They are not.
How does a Struct look like?
type MyFakeClass struct {
attribute1 string
}
Can such a Class (Struct) also have methods? Sure.
func (mc MyFakeClass) printMyAttribute() {
fmt.Println(mc.attribute1)
}
Perfect. Except… We have both been fooled, like Instagram fanboys/fangirls, by filters and convenient angles of that girl/guy from next door…
The above method with a receiver argument (the Go term), can be also written as a regular function.
func printMyAttribute(mc MyFakeClass) {
fmt.Println(mc.attribute1)
}
This is the version the Go compiler actually uses behind the scenes!
Thank you Mihalis Tsoukalos, author of a book Mastering Go for this epiphany!
Imagine you would be writing all your code in this manner, would you still create a GodStruct and pass it around by value all the time?
Struct vs Object behaviour
The difference is noticeable from the following example.
We could achieve the desired “Object Oriented” behaviour by implementing a common pointer receiver function.
While this is a common pattern (technically, the only possible way to update a Struct without creating a new instance) and a recommended way of updating a Struct, I believe pointers should be used with caution. But then, I am a huge fan of immutability, so I may be slightly biased.
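A minimal sketch of the pointer-receiver pattern being described, using a made-up Counter type (not from the article):

package main

import "fmt"

type Counter struct {
    n int
}

// Pointer receiver: the method mutates the existing value instead of a copy.
func (c *Counter) Increment() {
    c.n++
}

// Value receiver: works on a copy, so the caller's Counter is left unchanged.
func (c Counter) Peek() int {
    return c.n
}

func main() {
    c := Counter{}
    c.Increment()
    c.Increment()
    fmt.Println(c.Peek()) // 2
}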
In the next section I am going to share my experience and some researched points when to use a Value Receiver and when a Pointer Receiver.
Value vs Pointer Receiver
When to use a Value Receiver:
- if the receiver is a map, func or chan
- if the receiver is a slice and the method doesn’t reslice or reallocate the slice
- if the receiver doesn’t mutate its state
- if the receiver is a small/medium array or struct that is naturally a value type (for instance, something like the time.Time type, XY coordinates, basically anything representing a collection of fields), a ValueObject one could say
Advantages of a Value Receiver:
- concurrently safe, compatible with GoRoutines (that’s why I use Go in first place)
- Value Copy Cost associated with Value Receivers is not a performance bottleneck issue in absolute majority of applications
- can actually reduce the amount of garbage that can be generated; if a value is passed to a value method, an on-stack copy can be used instead of allocating on the heap. The stack is faster because the access pattern makes it trivial to allocate and deallocate memory from it (a pointer/integer is simply incremented or decremented), while the heap has much more complex bookkeeping involved in an allocation or free
- it directly, conciously forces to design small Structs
- easier to argue about encapsulation, responsibilities
- keep it simple stupid. Yes, pointers can be tricky because you never know the next project’s dev and Go surely is a young language
- obvious I/O
- unit testing is like walking through a pink garden (a Slovak-only expression?), meaning easy
- no NIL if conditions (a NIL can be passed to a pointer receiver and cause a panic attack)
Advantages of a Pointer Receiver:
- more efficient once allocated in terms of CPU and Memory (are those few ms actually a business requirement?)
- can maintain and mutate state
When to use a Pointer Receiver:
- working with large datasets, Pointer Receiver is more efficient
- if the developer writes high performance analytical application like NewRelic or a new Blockchain DataStore DB
- if the receiver is a Struct that contains a sync.Mutex or similar synchronizing field, the receiver must be a pointer to avoid copying (Why a mutex in the first place? Use channels)
- if the receiver performs mutation (can be mostly avoided by designing pure functions with clear intention and obvious I/O)
- if a Struct is maintaining state, e.g. TokenCache
type TokenCache struct {
cache map[string]map[string]bool
}
func (c *TokenCache) Add(contract string, token string, authorized bool) {
tokens := c.cache[contract]
if tokens == nil {
tokens = make(map[string]bool)
}
tokens[token] = authorized
c.cache[contract] = tokens
}
My personal rules when designing a Struct to maintain its state:
- I make sure ALL attributes are PRIVATE, interaction is possible only via defined method receivers
- I don’t pass this Struct to any goroutine
General rule for both types
If your Struct has a Value Receiver method, all other methods of this Struct should be Value Receivers.
The same for pointers.
If your Struct has a Pointer Receiver method, all other methods of this Struct should be Pointer Receivers.
Consistency FTW!
Personal ultimate 2 rules:
- Write as many First Class Functions as possible and KISS.
func roll(s score) (score, bool) {
outcome := rand.Intn(6) + 1 // A random int in [1, 6]
if outcome == 1 {
return score{s.opponent, s.player, 0}, true
}
return score{s.player, s.opponent, outcome + s.thisTurn}, false
}
- Abstract business complexity into separate functions, even tight one-function interfaces such as the famous standard library io.Reader and io.Writer, and comfortably pass them as function arguments; unit testing and mocking will become extremely convenient (more on this topic in an upcoming blog post)
type Reader interface {
Read(p []byte) (n int, err error)
}
type Writer interface {
Write(p []byte) (n int, err error)
}
func JSON(reader io.Reader) (ABI, error) {
Extra resources on topic Value vs Pointer vs Function in Go:
Summary
Go was not designed to be a primarily Object Oriented language in the way languages such as Java/PHP/C++ were, but the comparison is necessary because those are the languages the majority of developers are coming from as fresh Gophers, bringing old solutions and techniques with them that may not necessarily be good solutions in the Go environment.
Go is a combination of paradigms.
- It satisfies the majority of Object Oriented characteristics (both mainstream and original ones, as stated at the beginning of the article) to solve general software extensibility issues and provide a way for developers to design domains more naturally
- It masters Functional/Procedural/Objects characteristics and Messaging to solve problems with concurrency and parallelism in the era of multicore CPU hardware architecture
Did you enjoy this article?
Let’s connect on Twitter for more:
The latest Tweets from Blocks by Lukáš (@BlocksByLukas). I help developers to master blockchain via UML Diagrams…twitter.com
Thanks and have a great day!
|
https://medium.com/gophersland/gopher-vs-object-oriented-golang-4fa62b88c701
|
CC-MAIN-2019-26
|
en
|
refinedweb
|
Contents
strong and enforceable labor standards and environmental commitments; groundbreaking rules to ensure fair competition between State-owned enterprises and private companies; commitments that will improve transparency and make it easier for small- and medium-sized businesses to export; a robust intellectual property rights framework to promote innovation, while supporting access to innovative and generic medicines; and obligations that will promote an open Internet and a thriving digital economy.
RESOURCES
Read the full Bipartisan Congressional Trade Priorities & Accountability Act of 2015 here:
American Interests & Values
The Context for TPP
Behind these statistics are real success stories for working families: the auto parts firm that would have closed its line and gone dark had it not been for overseas markets; the craftsman now finding customers around the world via the internet; and the technology company or the family farm that secured that new contract abroad. There are hundreds of thousands more stories like these.
...environment and endangered species throughout all 12 partner countries.
RESOURCES
For more information on President Obama's trade agenda, visit
The Trans-Pacific Partnership
Building on a Record of Success
GOODS
MANUFACTURING
AGRICULTURE
SERVICES
Objectives by Topic
Trade in Goods
The United States ships more than $1.9 billion in goods to TPP countries every day. In TPP, the United States is working to build on those exports by negotiating comprehensive and preferential access across an expansive, duty-free trading region for the industrial goods, food and agriculture products, and textiles, which will allow our exporters to develop and expand their participation in the value chains of the fastest-growing economies in the world.
Example: U.S. poultry currently faces a 40-percent tariff in Malaysia. U.S. poultry would become more affordable in Malaysia under a TPP Agreement that reduces these tariffs to zero.
OBJECTIVES
Eliminate tariffs on trade between each TPP country and the United States on the broadest possible basis, taking into account the need to obtain competitive opportunities for U.S. exports while addressing U.S. import sensitivities. This includes eliminating tariffs on U.S. manufactured goods as well as on most agricultural products.
Obtain full reciprocal access to TPP country markets and more open
conditions of trade for U.S. textile and apparel products (see more detail
in the Textiles section of this summary).
Reaffirm and build on WTO commitments on technical barriers to trade
(TBT) (see more detail in TBT section of this summary).
RESOURCES
For more information on industrial and manufacturing trade, visit
Textiles
U.S. textile and apparel manufacturers sold more than $10 billion worth of products to TPP countries in 2013, an increase of 5.4 percent from the previous year. Many U.S. yarns, fabrics, and apparel currently face very high tariffs upon entering some TPP countries. Our goal in the TPP negotiations is to remove tariff and non-tariff barriers to textile and apparel exports to enhance the competitiveness of our producers in the Asia-Pacific region.
OBJECTIVES
Secure a yarn forward rule of origin, which requires that textile and
apparel products be made using U.S. or other TPP country yarns and
fabrics to qualify for the benefits of the Agreement.
Establish a carefully crafted short supply list, which would allow fabrics, yarns, and fibers that are not commercially available in TPP countries to be sourced from non-TPP countries and used in the production of apparel in the TPP region without losing duty preference.
RESOURCES
For more information on textiles and apparel trade, visit
Services
Services industries account for four out of five jobs in the United States, the world's largest services exporter. In 2014, U.S. services exports of $710 billion supported an estimated 4.6 million jobs and expanded the U.S. services trade surplus to $233 billion. American service workers are in industries that include the Internet, information, and software services; professional services; financial services; media and entertainment; express delivery and logistics; scientific research and development, telecommunications, and others.
Securing liberalized and fair access to foreign services markets will help U.S.
service suppliers, both small and large, to increase exports to TPP markets
and support more jobs at home.
OBJECTIVES
Ensure that TPP countries do not discriminate against U.S. service suppliers.
Establish new or enhanced obligations in specific sectors important to
promoting trade (e.g., enhanced disciplines for express delivery services
will promote regional supply chains and aid small businesses, which
often are highly dependent on express delivery services for integration
into supply chains and distribution networks).
Ensure that TPP service benefits are not open to shell companies controlled by non-TPP countries.
RESOURCES
For more information on trade in services, visit
Investment
OBJECTIVES
Allow for the transfer of funds related to an investment covered under
the Agreement, with exceptions to ensure that governments retain the
flexibility to manage volatile capital flows.
Ensure that investors have the ability to appoint senior managers without regard to nationality, and that nationality-based restrictions on the appointment of board members do not impair an investor's control over its investment.
Conducting hearings open to the public.
Making public notices of arbitration, pleadings, submissions,
and awards.
Providing for the participation of civil society organizations and other outside parties through the submission of amicus curiae briefs by:
Labor unions.
Environmental groups.
Public health advocates.
Other stakeholders.
RESOURCES
For more information on investment, visit
Labor
OBJECTIVES
Ensure that labor commitments are subject to the same dispute settlement mechanism, including potential trade sanctions, that applies to other chapters of the Agreement.
Establish rules that will ensure that TPP countries do not waive or
derogate from fundamental labor laws in a manner that affects trade or
investment, and that they take initiatives to discourage trade in goods
produced by forced labor, regardless of whether the source country is a
TPP country.
Protect against the degradation of either fundamental rights or working
conditions in export processing zones.
Establish a means for the public to raise concerns directly with TPP governments if they believe a TPP country is not meeting its labor commitments, and requirements that governments consider and respond to those concerns.
RESOURCES
For more information on trade and labor, visit
Environment
OBJECTIVES
Protect and conserve flora and fauna, including through action by countries to combat illegal wildlife and timber trafficking.
Require TPP countries to ensure access to fair, equitable and transparent administrative and judicial proceedings for enforcing their environmental laws, and provide appropriate sanctions or remedies for violations of their environmental laws.
RESOURCES
For more information on trade and the environment, visit
E-Commerce and Telecommunications
In the past five years, the number of Internet users worldwide has ballooned ... Internet-based commerce. This is a central area of American leadership and one of the world's great opportunities for economic growth. TPP is designed to preserve the single, global, digital marketplace to ensure the free flow of global information and data that drive the digital economy. In doing so, we intend to promote trade and investment that enhances online speed, access, and quality.
OBJECTIVES
Ensure close cooperation among TPP countries to help businesses, especially small- and medium-sized businesses, overcome obstacles and take advantage of electronic commerce.
RESOURCES
For more information on e-commerce and ICT, visit
State-Owned Enterprises
OBJECTIVES
Ensure that SOEs make commercial purchases and sales on the basis of
commercial considerations.
Ensure that SOEs that receive subsidies do not harm U.S. businesses
and workers.
Competition Policy
Small and Medium-Sized Enterprises
Our goal is to use TPP to provide SMEs the tools they need to compete across the Asia-Pacific region. TPP will benefit SMEs by eliminating tariff and non-tariff barriers, streamlining customs procedures, strengthening intellectual property protection, promoting e-commerce, and developing more efficient and transparent regulatory regimes.
TPP will include a first-ever chapter focusing on issues that create particular
challenges for SMEs.
OBJECTIVES
Eliminate high tariffs across the TPP region that price out many goods
and agricultural products sold by U.S. small businesses.
Promote digital trade and Internet freedom to ensure that small businesses can access the global marketplace.
Help small businesses integrate into global supply chains.
RESOURCES
For more information on small- and medium-sized businesses, visit
Intellectual Property Rights
In TPP, we are working to advance strong and balanced rules that will protect and promote U.S. exports of IP-intensive products and services throughout the Asia-Pacific region for the benefit of producers and consumers, ... technological, and medical innovation, and take part in development and enjoyment of new media and the arts.
OBJECTIVES
Establish strong measures to prevent theft of trade secrets, including
cyber theft of trade secrets.
Establish rules that promote transparency and due process with respect
to trademarks and geographical indications.
Promote affordable access to medicines, taking into account levels
of development among the TPP countries and their existing laws
and international commitments.
Make it easier for businesses to search, register, and protect their trademarks and patents in new markets, which is particularly important for small businesses.
RESOURCES
For more information on intellectual property rights, visit
Technical Barriers to Trade
OBJECTIVES
Secure commitments aimed at enhancing cooperation in key sectors such as wine and distilled spirits, medical devices, cosmetics, pharmaceuticals, and information & communication technology.
RESOURCES
For more information on technical barriers to trade, visit
Sanitary and Phytosanitary Measures
Our goal is to ensure that SPS measures will be developed and implemented in a transparent and non-discriminatory manner, based on science. TPP will also help expand our agricultural exports by addressing unscientific, discriminatory, and otherwise unwarranted barriers that are often designed to keep American goods out of the market. TPP will require no changes to existing U.S. food safety laws or regulations.
OBJECTIVES
Affirm and build upon WTO commitments on SPS, making clear that
countries determine for themselves the level of protection they believe
to be appropriate to protect food safety, and plant and animal health.
Establish an on-going mechanism for improved dialogue and cooperation on addressing SPS and TBT issues.
RESOURCES
For more information on SPS, visit
Transparency and Anticorruption
OBJECTIVES
Establish, for the first time in a U.S. trade Agreement, a chapter on regulatory coherence, including provisions on widely-accepted good regulatory practices, already standard in the United States, such as impact assessments, public transparency and communications around regulations, and public notice of government measures.
Customs, Trade Facilitation, and Rules of Origin
Cutting the red tape of trade, including by reducing costs and increasing customs efficiencies, will make it cheaper, easier, and faster for businesses. This is particularly valuable for small- and medium-sized enterprises that find it difficult to navigate complex customs procedures.
OBJECTIVES
Ensure that, to the greatest extent possible, shipments are kept in ports
no longer than necessary to comply with customs laws.
TPP's rules of origin provisions are designed to ensure that only goods that originate in the TPP region receive preferential treatment under the Agreement; this approach supports production and jobs in the United States and helps link U.S. firms into regional supply chains, reducing the incentive for companies to move production abroad in order to remain competitive.
Establish strong and common rules of origin to ensure that the benefits of TPP go to the United States and other TPP countries.
Ensure that goods receive the benefits of the Agreement only if they are wholly obtained or produced within the TPP region; produced in a TPP country exclusively from other TPP originating materials; or produced in a TPP country from materials that meet the product-specific rules.
Put in place a common TPP-wide system for traders to show that their
goods are made in the TPP region and for customs to verify that traders
are following the rules of origin.
RESOURCES
For more information on rules of origin, visit
Government Procurement
Nothing in TPP will prevent the United States government from buying American goods and services, but TPP will unlock significant opportunities for U.S. businesses and workers to increase their access to government procurement markets in TPP countries. TPP countries are fast-growing markets in which governments are expanding their buying and building as they grow more prosperous. TPP is an opportunity to sell more American-made machinery, medical technologies, transportation and infrastructure equipment, information technology, communications equipment, and other goods and services.
OBJECTIVES
Keep in place domestic preferential purchasing programs such as:
Preference programs for small businesses, women- and minority-owned businesses, service-disabled veterans, and distressed areas.
Buy America requirements on Federal assistance to state and
local projects
Transportation services, food assistance, and farm support.
Key Department of Defense procurement.
Maintain broad exceptions for special government procurement regarding:
National security.
Measures necessary to protect public morals, order, or safety.
Protecting human, animal, or plant life or health.
Protecting intellectual property.
Allow for labor, environmental, and other criteria to be included in contracting requirements.
RESOURCES
For more information on government procurement, visit
Development and Trade Capacity-Building
The United States views trade as an important tool for improving access to economic opportunity for women and low income individuals; incentivizing private-public partnerships in development activities; and designing sustainable models for economic growth. In addition, the United States sees trade capacity-building as critical to assisting TPP developing countries such as Vietnam, Peru, Brunei, and others, in implementing the Agreement and ensuring they can benefit from it.
TPP will include a chapter on cooperation and capacity building and, for the
first time in any U.S. trade Agreement, a chapter dedicated specifically to
development.
OBJECTIVES
Remove tariff and non-tariff barriers that limit trade and impede growth
and development.
Establish the strongest standards for protecting workers and the environment ever included in a trade Agreement.
Promote policies related to education, science and technology,
research and innovation.
RESOURCES
For more information on trade and development, visit
Dispute Settlement
OBJECTIVES
RESOURCES
For more information on dispute settlement, visit
U.S.-Japan Bilateral Negotiations on
Motor Vehicle Trade
The United States and Japan also agreed to address non-tariff measures
through parallel negotiations to TPP, which were launched in August 2013.
OBJECTIVES
Establish an accelerated dispute settlement procedure that would apply
to the automotive sector that includes a mechanism to snap back
tariffs as a remedy.
RESOURCES
Toward the Trans-Pacific Partnership: U.S. Consultations with Japan
Office of the United States Trade Representative
600 17th Street, Northwest, Washington, D.C. 20508
(202) 395-6121
@USTradeRep facebook.com/USTradeRep
|
https://www.scribd.com/document/341399890/TPP-Detailed-Summary-of-US-Objectives
|
CC-MAIN-2019-26
|
en
|
refinedweb
|
How to: Write a parallel_for_each Loop
This example shows how to use the concurrency::parallel_for_each algorithm to compute the count of prime numbers in a std::array object in parallel.
Example

// Headers needed by this listing; the original include lines were restored here.
#include <windows.h>
#include <ppl.h>
#include <array>
#include <iostream>

using namespace concurrency;
using namespace std;

// Calls the provided work function and returns the number of milliseconds
// that it takes to call that function.
template <class Function>
__int64 time_call(Function&& f)
{
   __int64 begin = GetTickCount();
   f();
   return GetTickCount() - begin;
}

// Determines whether the input value is prime.
bool is_prime(int n)
{
   if (n < 2)
      return false;
   for (int i = 2; i < n; ++i)
   {
      if ((n % i) == 0)
         return false;
   }
   return true;
}

int wmain()
{
   // Create an array object that contains 200000 integers.
   array<int, 200000> a;

   // Initialize the array such that a[i] == i.
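A hedged sketch of the parallel counting step itself, using parallel_for_each with a combinable counter; this is an illustration of the pattern rather than the original sample code, and it repeats the is_prime helper so it compiles on its own.

// Sketch: count primes in parallel with parallel_for_each and a combinable counter.
#include <ppl.h>
#include <array>
#include <functional>
#include <iostream>

using namespace concurrency;
using namespace std;

// Same helper as in the listing above.
bool is_prime(int n)
{
   if (n < 2) return false;
   for (int i = 2; i < n; ++i)
      if ((n % i) == 0) return false;
   return true;
}

int wmain()
{
   array<int, 200000> a;
   for (size_t i = 0; i < a.size(); ++i)
      a[i] = static_cast<int>(i);          // a[i] == i

   combinable<size_t> counts;              // one partial count per worker thread
   parallel_for_each(begin(a), end(a), [&counts](int n) {
      if (is_prime(n))
         counts.local()++;
   });

   wcout << L"Found " << counts.combine(plus<size_t>())
         << L" prime numbers." << endl;
}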
Compiling the Code
To compile the code, copy it and then paste it in a Visual Studio project, or paste it in a file that is named
parallel-count-primes.cpp and then run the following command in a Visual Studio Command Prompt window.
cl.exe /EHsc parallel-count-primes.cpp
Robust Programming.
See also
Parallel Algorithms
parallel_for_each Function
|
https://docs.microsoft.com/en-us/cpp/parallel/concrt/how-to-write-a-parallel-for-each-loop?view=vs-2017
|
CC-MAIN-2019-26
|
en
|
refinedweb
|
.
Overview
Fist thing first let’s create a Stream using fs2. Unlike Akka Streams that always require an actor system and a materializer, an fs2 stream can be created straight away:
val stream = Stream(1, 2, 3, 4)
An interesting thing to note is the type of the stream:
fs2.Stream[fs2.Pure, Int]
So an fs2 stream has 2 type parameters. The second one is the one you expect as it represents the type of the elements of the stream – e.g.
Int.
The first type is a type constructor which corresponds to the effect type. Here
Pure means that the stream doesn’t require any effect to be evaluated.
If you were to integrate with cats-effect (let’s say to use cats IO – but fs2 can use any effect) you would create a stream of type
fs2.Stream[IO, Int]
Of course you can no longer use
fs2.Stream.apply to build the stream as the IO effect needs to be evaluated to produce an element in the stream. This is simply done with the
eval method
fs2.Stream.eval(IO(2))
Like cats-effect, creating a Stream doesn’t run the stream. To “run” the stream we need to “compile” it and then use one of the cats effect run method:
fs2.Stream.eval(IO(2)) .compile .toList .unsafeRunSync
Now that we know how to create a stream let’s jump straight into some usage patterns and see how it compares to Akka streams.
Patterns
Flattening a stream
Unlike Akka stream, f2 doesn’t provide a
mapConcat method but it’s still pretty easy thing to do. We use
emits to create a stream from a collection which we can then just
flatMap:
fs2.Stream.emits('A' to 'E') .map(letter => (1 to 3).map(index => s"$letter$index")) .flatMap(fs2.Stream.emits) // this flattens the stream .compile .toList // or fs2.Stream.emits('A' to 'E') .map(letter => fs2.Stream.emits(1 to 3).map(index => s"$letter$index")) .flatten .compile .toList
This generates the elements in sequence and gives a
List(A1, A2, A3, B1, B2, B3, C1, ...)
It processes all the elements of the first stream before moving to the second stream. Now let’s imagine that each stream is infinite (e.g.
A1, A2, A3, A4, A5, ...). In this case the B elements (
B1, B2, B3, B4, ...) are never reached.
If we want to process the streams in parallel we can use
parJoin
fs2.Stream.emits[IO, Char]('A' to 'E')
  .map(letter => fs2.Stream.emits[IO, Int](1 to 3).map(index => s"$letter$index"))
  .parJoin(5)
Now the streams are processed in parallel in a non-deterministic way. One possible outcome is
D1, D2, D3, A1, A2, ...
Note that we’re using
IO here because
parJoin requires a concurrent effect.
Alternatively if you want to consume the stream in a breadth-first like fashion you have to do a little more work yourself (I’m not aware of anything usable out-of-the-box)
def breadthFirst[F[_], E](streams: Stream[F, Stream[F, E]]): Stream[F, Stream[F, E]] =
  Stream.unfoldEval(streams) { streams =>
    val values = streams.flatMap(_.head) // get the head of each stream
    val next = streams.map(_.tail)       // continue with the tails
    values.compile.toList.map(_.headOption.map(_ => values -> next)) // stop when there's no more values
  }
Batching
As with Akka Streams, batching is straightforward with baked-in methods:
Stream.emits(1 to 100).chunkN(10).map(println).compile.drain
A
Chunk is a finite sequence of values that is used by fs2 streams internally:
val s = Stream(1, 2) ++ Stream(3) ++ Stream(4, 5, 6)
val chunks = s.chunks.toList // List(Chunk(1, 2), Chunk(3), Chunk(4, 5, 6))
And if you want to batch within a specific time window
groupWithin is what you need:
Stream.awakeEvery[IO](10.millis)
  .groupWithin(100, 100.millis)
  .evalTap(chunk => IO(println(s"Processing batch of ${chunk.size} elements")))
  .compile
  .drain
  .unsafeRunTimed(1.second)
Asynchronous computation
Here fs2 has clearly the advantage as asynchronicity depends directly on the effect type
F which must have an
Async[F] in scope.
It offers all the
eval methods:
evalMap,
evalTap,
evalScan,
evalMapAccumulate.
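As a small illustration (not from the original post; it assumes cats-effect IO and the same imports as the earlier examples), evalTap and evalMap look like this:

import cats.effect.IO
import fs2.Stream

// evalTap runs an effect for its side effect and keeps the original element;
// evalMap replaces each element with the result of an effectful function.
val logged: Stream[IO, Int] =
  Stream.emits(1 to 5)
    .covary[IO]
    .evalTap(n => IO(println(s"saw $n")))
    .evalMap(n => IO(n * 2))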
If you want to run asynchronous effect in parallel the effect type must have an instance of
Concurrent[F] in scope. If it’s the case the
parEval methods are available:
parEvalMap and
parEvalMapAccumulate.
Let’s keep the same example where a program write asynchronously to a database with
def writeToDatabase[F[_]: Async](chunk: Chunk[Int]): F[Unit] =
  Async[F].async { callback =>
    println(s"Writing batch of $chunk to database by ${Thread.currentThread().getName}")
    callback(Right(()))
  }
we can then write batches in parallel to the database with
fs2.Stream.emits(1 to 10000)
  .chunkN(10)
  .covary[IO]
  .parEvalMap(10)(writeToDatabase[IO])
  .compile
  .drain
  .unsafeRunSync()
Note that
parEvalMap preserves the stream ordering. If this is not required there is a
parEvalMapUnordered method.
If you’d like some consistency with the Akka Streams API you’d be glad to know that there is a
mapAsync (and
mapAsyncUnordered) methods that are just aliases for
parEvalMap (and
parEvalMapUnordered respectively).
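For example, the batch-writing snippet above could equivalently be written with mapAsync (a sketch assuming the alias behaves as just described and the same writeToDatabase helper and implicits are in scope):

fs2.Stream.emits(1 to 10000)
  .chunkN(10)
  .covary[IO]
  .mapAsync(10)(writeToDatabase[IO]) // alias for parEvalMap(10)(...)
  .compile
  .drain
  .unsafeRunSync()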
Concurrency
In fs2 the async boundaries can be controlled directly by the effect computations. Let’s consider a similar example as with Akka Streams where a stream runs through a series of stages (or pipes)
def pipe[F[_] : Sync](name: String): Stream[F, Int] => Stream[F, Int] =
  _.evalTap { index =>
    Sync[F].delay(
      println(s"Stage $name processing $index by ${Thread.currentThread().getName}")
    )
  }

Stream.emits(1 to 10000)
  .covary[IO]
  .through(pipe("A"))
  .through(pipe("B"))
  .through(pipe("C"))
  .compile
  .drain
  .unsafeRunSync()
As expected this program uses a single thread and each element is processed sequentially through the pipes.
Now if we change our pipe definition to
def pipe[F[_] : Sync : LiftIO](name: String): Stream[F, Int] => Stream[F, Int] =
  _.evalTap { index =>
    (IO.shift *> IO(println(s"Stage $name processing $index by ${Thread.currentThread().getName}")))
      .runAsync(_ => IO.unit)
      .to[F]
  }
IO.shift places an async boundary giving a chance to use another thread for execution. The
runAsync method runs the computation without waiting for its result.
If applied to the same stream as before the elements are still processed sequentially (
1 processing starts before
2 which starts before
3 ….) and the stages
A,
B and
C are also started in order (
A starts, then
B, then
C).
However we no longer wait for each stage to finish and as we use different threads the execution becomes non-deterministic.
Throttling
Fs2 provides a mechanism to create a stream that emits an element at a fixed interval. If zipped with another stream it limits the rate of the second stream:
Stream.awakeEvery[IO](1.second) zipRight Stream.emits(1 to 100)
As it’s a very common pattern fs2 provide us with a method
metered that do just that.
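For instance (a sketch assuming an implicit Timer[IO] and the duration import are in scope, as for the awakeEvery examples):

// emits at most one element per second
Stream.emits(1 to 100)
  .covary[IO]
  .metered(1.second)
  .compile
  .toList
  .unsafeRunSync()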
If instead of limiting the rate of the stream you prefer to discard some elements you can use
debounce
val ints = Stream.constant[IO, Int](1).scan1(_ + _) // 1, 2, 3, ... ints.debounce(1.second)
This emits an element at a fixed rate discarding every element produced in between.
Alternatively if you want to accept the first X elements emitted during a time interval and discard any other element until the end of the interval this is a little more involved as there is nothing that comes out of the box.
E.g. if we want 100 elements per second, we want to keep the first 100 elements then discard any other elements until another second starts.
val ints = Stream.constant[IO, Int](1).scan1(_ + _) // 1, 2, 3, ...
val ticks = fs2.Stream.every[IO](1.second)          // emits true every second
val rate = 100                                      // 100 elements per second

val throttledInts = ints.zip(ticks)
  .scan((0, rate + 1)) {
    case (_, (n, true))       => (n, 0)         // new second start, emit element and reset counter
    case ((_, count), (n, _)) => (n, count + 1) // emit elements and increment counter
  }
  .filter(_._2 < rate) // keep only the elements where counter is less than rate
  .map(_._1)           // remove counter
Idle timeout
Akka Streams has an
idleTimeout method that fails a stream if no elements are emitted within a given timeout.
Fs2 doesn’t provide something similar but this is trivial to implement
def idleTimeout[F[_], A](
  s: fs2.Stream[F, A],
  timeout: FiniteDuration
)(
  implicit F: Concurrent[F],
  timer: Timer[F]
): fs2.Stream[F, A] =
  s.groupWithin(1, timeout).evalMap(
    _.head.fold[F[A]](F.raiseError(new Exception("timeout")))(F.pure)
  )
The idea is to use
groupWithin then check the
Chunk. If it contains an element that’s good we emit the element otherwise the chunk is empty so we raise the error.
Error Handling
Any exception raised during the stream processing (or explicitly calling
Stream.raiseError) ends the stream in error. The error handling is similar to the cats way of doing this
Stream.raiseError(new Exception("Oops")) .handleErrorWith { error => Stream(error.getMessage) }
Fs2 provides a flexible way to deal with retries. Be it retrying a fixed number of times, with or without backoff, … You can retry a single effect (or a whole stream if it is compiled into a single effect)
val ints = (
  Stream.emits(1 to 10) ++ Stream.raiseError(new Exception("the end"))
).covary[IO].evalTap(n => IO(println(n))).compile.drain

Stream.retry(
  ints,
  delay = 1.second,  // delay before first retry
  nextDelay = _ * 2, // doubles the delay for every retry
  maxAttempts = 5,
  _ => true          // retry on any error
).compile.drain.unsafeRunSync()
Handling resources
As clearly mentioned in the fs2 documentation error handling is not meant for freeing up resources. For that matter fs2 provides a much safer way of doing things via the concept of
bracket.
That probably sounds familiar as it’s the same concept as provided by cats-effect.
Using
bracket makes sure that the resources are always freed whatever the outcome of the computation:
val acquire = IO(println("Acquiring resource")) *> IO(new Random())
val release = (_: Random) => IO(println("Releasing resource"))

Stream.bracket(acquire)(release).flatMap { rand =>
  fs2.Stream.emits(1 to 10).map(_ => rand.nextInt())
}.evalTap(n => IO(println(n))).compile.drain.unsafeRunSync()
There are other bracket methods like
resource (which takes a resource directly) and
bracketCase (which let’s you know how the computation ended in the release phase)
Conclusion
I found Fs2 API both simple and complete enough to write powerful applications. It might not cover all cases but the building blocks are powerful enough to let you write code to suit your own needs.
Embedding the effect inside the stream is a powerful concept as it allows you to express all your computation as a stream (including the side-effects) and then just use
compile.drain to run it.
Fs2 has been a pleasure to work with so far and it has my preference over Akka Streams (it doesn’t require an ActorSystem nor does it rely on Scala Futures). It is flexible enough to work with any effect and the building blocks are easy to combine. Plus thorough documentation, so what’s left to ask?
|
https://www.beyondthelines.net/programming/streaming-patterns-with-fs2/
|
CC-MAIN-2019-26
|
en
|
refinedweb
|
#include <Keypad.h>

const byte ROWS = 4; // Four rows
const byte COLS = 4; // Four columns

// Define the Keymap
char keys[ROWS][COLS] = {
  {'1','2','3','A'},
  {'4','5','6','B'},
  {'7','8','9','C'},
  {'*','0','#','D'},
};

// Connect keypad ROW0, ROW1, ROW2 and ROW3 to these Arduino pins.
byte rowPins[ROWS] = { 6, 8, 7, 9 };

// Connect keypad COL0, COL1 and COL2 to these Arduino pins.
byte colPins[COLS] = { 2, 5, 4, 3 };

// Create the Keypad
Keypad kpd = Keypad( makeKeymap(keys), rowPins, colPins, ROWS, COLS );

void setup()
{
  Serial.begin(9600);
}

void loop()
{
  char key = kpd.getKey();

  if(key) // Check for a valid key.
  {
    Serial.println(key);
  }
}
int row[4] = {36,34,32,30}; // from left to right, the first four connections
int col[4] = {28,26,24,22}; // from left to right, the next four connections
int results[4][4] = {{1,2,3,-2},{4,5,6,-3},{7,8,9,-4},{-6,0,-7,-5}}; // This is the array to match up the keystroke with the associated value
                                                                     // I use the special keys as function keys, so I want to capture those strokes.

void setup()
{
  for (int x=0;x<4;x++)
  {
    pinMode(row[x],OUTPUT);       // I assign the rows as output
    pinMode(col[x],INPUT_PULLUP); // I assign the columns as inputs and use the internal pullup (probably not necessary, but I did it anyway)
    digitalWrite(row[x],HIGH);    // I cycle through and set up the entire row index as high.
  }
  Serial.begin(38400); // This is used just to debug. Make sure you test all the keys, as it is easy to plug things into the wrong pin.
}

int keyboard_read() { // This is the meaty function to scan the keypad
  int value = -1; // I set -1 as my null value, as I want to be able to receive back a zero key-stroke
  for (int x=0;x<4;x++)
  {
    digitalWrite(row[x],LOW); // I set the active row as low
    delay(5); // I put the delay in, because I want to be sure the full row is pulled low, again probably not necessary
    for (int y=0;y<4;y++)
    {
      if (digitalRead(col[y]) == LOW) { // As I cycle through, the key pressed will be the one that returns a low signal
        delay(250); // I know I'm cheating, but I just put a delay in after a key-stroke to make sure there is time to let the key up before the next read
        value=results[x][y]; // After reading which row and column was pressed, I pull the associated value from the key array.
      }
    }
    digitalWrite(row[x],HIGH); // Don't forget to set the row back to high, so the next row read is correct.
  }
  return(value); // The function then returns the value read, or the default -1 if no key was pressed
}

void loop()
{
  int key_stroke;
  key_stroke=keyboard_read();
  if (key_stroke != -1) { // Yes, I could have read the function straight in, but because of where I put the delay on a key-press
    Serial.println(String(key_stroke)); // I would often have the if statement trigger, only to read the board again and return the no-key -1 value
  }
}
|
http://forum.arduino.cc/index.php?topic=146106.msg1097667
|
CC-MAIN-2018-05
|
en
|
refinedweb
|
The repository for high quality TypeScript type definitions.
Also see the definitelytyped.org website, although information in this README is more up-to-date.
First, fork this repository, install node, and run
npm install.
Edit an existing package
cd types/my-package-to-edit
- Make changes. Remember to edit tests.
-,Typed.
sourceRepoURL: This should point to the repository that contains the typings.
libraryName: Descriptive name of the library, e.g. "Angular 2" instead of "angular2".
Lint
To lint a package, just add a
tslint.json to that package containing
{ "extends": "dtslint/dt.json" }. All new packages must be linted. If it's been more than 24 hours, ping @RyanCavanaugh and @andy-ms on the PR.
A package uses.Typed and deprecate the associated
@types package.
I want to update a package to a new major version
Before making your change, please create a new subfolder with the current version e.g.
v2, and copy existing files to it. You will need to:
- Update the relative paths in
tsconfig.jsonas well as
tslint.json.
- Add path mapping rules to ensure that tests are running against the intended version.
For example history v2
tsconfig.json looks like:
{ "compilerOptions": { "baseUrl": "../../", "typeRoots": ["../../"], "paths": { "history": [ "history/v2" ] }, }, "files": [ "index.d.ts", "history-tests.ts" ] }
Please note that unless upgrading something backwards-compatible like
node, all packages depending on the updated package need a path mapping to it, as well as packages depending on those.
For example,
react-router depends on
history@2, so react-router
tsconfig.json has a path mapping to
"history": [ "history/v2" ];
transitively
react-router-bootstrap (which depends on
react-router) also adds a path mapping in its tsconfig.json., or to use a default import like
import foo from "foo"; if using the
--allowSyntheticDefaultImports flag if your module runtime supports an interop scheme for non-ECMAScript modules as such.
License
This project is licensed under the MIT license.
Copyrights on the definition files are respective of each contributor listed at the beginning of each definition file.
|
https://libraries.io/npm/@types%2Flodash/4.14.70
|
CC-MAIN-2018-05
|
en
|
refinedweb
|
/*
 * $Header: /home/cvs/jakarta-commons-sandbox/cli/src/java/org/apache/commons/cli/AlreadySelectedException.java,v 1.4 2002/06/06 09:37:26 jstrachan Exp $
 * $Revision: 1.4 $
 * $Date: 2002/06/06 09:37:26 $
 *
 * ====================================================================
 *
 * The Apache Software License, Version 1.1
 *
 * Copyright (c) 1999
 */

package org.apache.commons.cli;

/**
 * <p>Thrown when more than one option in an option group
 * has been provided.</p>
 *
 * @author John Keyes ( john at integralsource.com )
 * @see ParseException
 */
public class AlreadySelectedException extends ParseException {

    /**
     * <p>Construct a new <code>AlreadySelectedException</code>
     * with the specified detail message.</p>
     *
     * @param message the detail message
     */
    public AlreadySelectedException( String message ) {
        super( message );
    }
}
|
http://kickjava.com/src/org/apache/commons/cli/AlreadySelectedException.java.htm
|
CC-MAIN-2018-05
|
en
|
refinedweb
|
//**************************************
// Name: Save Embedded Resource to File
// Description:To save an Embedded Resource and save to file. This will take any embedded resource and save it to any filename you want.
// By: Victor Boba (from psc cd)
//**************************************
' Notes:
' The file that you add to the project has to have the Build Action
' property changed to "Embedded Resource" for this to work. This will add the file
' to the project as a resource and not a compiled item.
'
Dim ResStream As System.IO.Stream
' This is the name of your applications namespace followed by the name of the file that's embedded.
' So if your namespace is "MyProject" and the name of the file is "BlankDatabase.mdb" then
' the value for the sResPath would be "MyProject.BlankDatabase.mdb".
Dim sResPath As String = "AnimalControl.db.mdb"
Dim NewFilePathName As String = sPath
Dim numBytesRead As Integer = 0
' Get the Embedded Resource
ResStream = System.Reflection.Assembly.GetExecutingAssembly.GetManifestResourceStream(sResPath)
Dim numBytesToRead As Integer = CInt(ResStream.Length)
Dim bytes(ResStream.Length) As Byte
While numBytesToRead + 1 > 0
Dim n As Integer = ResStream.Read(bytes, numBytesRead, numBytesToRead)
' The end of the file has been reached.
If n = 0 Then
Exit While
End If
numBytesRead += n
numBytesToRead -= n
End While
' Save the resource to file
Dim fs As New FileStream(NewFilePathName, FileMode.Create)
fs.Write(bytes, 0, bytes.Length)
fs.Close()
|
http://planet-source-code.com/vb/scripts/ShowCode.asp?txtCodeId=2268&lngWId=10
|
CC-MAIN-2018-05
|
en
|
refinedweb
|
Blocking Transitions
Let's explore how to block transitions. We are going to make it so that if we change any user information when adding or editing a user, the application will block any transitions away until we confirm that we want to transition away. For example, imagine we change some info and then want to cancel. It's just a way to make sure a user doesn't accidentally lose their work.
We get this functionality by using the
Prompt component that ships with
react-router-dom. Let's first add the prompt to the
UserForm component. Place this at the top of the
Form component returned from the
render method.
<Prompt message="Are you sure you wanna do that?" />
We have created a
Prompt and given it a prop of
message. This will be the message the user sees when the prompt shows.
Also make sure to import the
Prompt component into this file.
import { Prompt } from 'react-router-dom';
Now if you go to an edit page and then press the cancel button, the prompt will show up! However, we only want it to show up if we change some information in the form. Luckily, we can pass the
Prompt component a
when prop. When this is
true, it will then show the prompt when a transition is attempted. If it's false, it won't show it. This means we need a variable in state that tracks whether information has been updated. This is easy since we have a single method that runs whenever an input is updated.
Inside
UserForm, update the creation of state in the
constructor method.
this.state = { user, formChanged: false };
Next, inside
handleChange, make sure state is updated so we know the form data has been changed.
handleChange(e, { name, value }) { const { user } = this.state; this.setState({ user: { ...user, [name]: value }, formChanged: true, }); }
Pull this
formChanged piece of state out of state in the
render method.
const { user: { name, email, phone, address, city, zip }, formChanged } = this.state;
Lastly, pass the value of
formChanged as the
when prop to the
Prompt component we just created.
<Prompt when={formChanged} message="Are you sure you wanna do that?" />
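Putting the pieces together, the relevant parts of UserForm now look roughly like this (a sketch, not the course's exact file; it assumes the form comes from semantic-ui-react, which matches the handleChange signature used above, and shows only one field):

import React, { Component } from 'react';
import { Prompt } from 'react-router-dom';
import { Form } from 'semantic-ui-react'; // assumed UI library; adapt to whatever Form the course uses

class UserForm extends Component {
  constructor(props) {
    super(props);
    const { user } = props;
    this.state = { user, formChanged: false };
    this.handleChange = this.handleChange.bind(this);
  }

  handleChange(e, { name, value }) {
    const { user } = this.state;
    this.setState({
      user: { ...user, [name]: value },
      formChanged: true,
    });
  }

  render() {
    const { user: { name }, formChanged } = this.state;
    return (
      <Form>
        <Prompt when={formChanged} message="Are you sure you wanna do that?" />
        <Form.Input name="name" value={name} onChange={this.handleChange} />
        {/* ...remaining fields and buttons... */}
      </Form>
    );
  }
}

export default UserForm;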
Now, try changing some data in an edit form and see what happens. The prompt shows up like before. Now reload that page and just click on the "Cancel" button. It doesn't show the prompt! Awesome! That wasn't too bad.
In the next video, we will discuss how we can show multiple routes that both match in separate areas.
|
https://scotch.io/courses/using-react-router-4/blocking-transitions
|
CC-MAIN-2018-05
|
en
|
refinedweb
|
Red Hat Bugzilla – Bug 241370
pread() always sets the offset=0 if gcc option -D_FILE_OFFSET_BITS=64 is set
Last modified: 2016-11-24 09:58:52 EST
Description of problem:
If a C code is compiled with the -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64
flag then the offset argument for pread() is passed as 0 or some garbage
number. This happens only on a powerPC. The sample code to reproduce this is as
follows:
===================================================================
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
int main(void)
{
int fdin, fdout, ret;
char *buf;
off_t off=73728;
buf=(char *) malloc(131072);
if (!buf) {printf ("Bad"); exit(0);}
fdin = open("/oradata/BAN25/system01.dbf", O_RDONLY);
if(fdin<0) printf("fdin error\n");
fdout = open("/tmp/checksystem.dbf", O_WRONLY | O_CREAT);
if(fdout<0) printf("fdout error\n");
ret = pread(fdin, buf, 8192, off);
if(ret!=8192){printf("error read\n");}
ret = write(fdout, buf, 8192);
if(ret!=8192){printf("error write\n");}
close(fdin);
close(fdout);
}
===================================================================
and was compiled as follows
#gcc -g -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64 try.c
The strace output for the same is attached with this bug (strace.out).
strace was generated as:
#strace -f ./a.out >strace.out 2>&1
Now pread would work fine if these changes were done.
-> If pread() is changed to pread64() in the code, then things work fine.
-> also if the code is compiled with -D_XOPEN_SOURCE=500 then things work fine
but unfortunately I cannot use the -D_XOPEN_SOURCE=500 flag as this flag
seems to unset the following type definitions: uint, u_long etc, which are
being used in the code base.
The same code was compiled on an Intel and AMD system and pread() worked fine
without the -D_XOPEN_SOURCE=500 compilation flag. That is, the offset was passed
successfully.
Why does this happen?
Version-Release number of selected component (if applicable):
This fails on a powerpc with either of the gcc versions glibc-2.3.4-2.19 and
glibc-2.3.4-2.25
How reproducible:
Always reproducible (consistent)
Steps to Reproduce:
1. Write a C code with the pread() call
2. Set the debug flag -D_FILE_OFFSET_BITS=64 for gcc and compile.
3. strace output for the execution shows that pread64() is called internally and
the offset value is changed to some junk value or 0.
I am passing the offset as 73728.
Actual results:
pread64(3, "\0\242\0\0\377\300\0\0\0\0\0\0\0\0\0\0\341\304\0\0\0\0"..., 8192, 0)
Expected results:
pread64(3, "\0\242\0\0\377\300\0\0\0\0\0\0\0\0\0\0\341\304\0\0\0\0"..., 8192, 73728)
Additional info:
If I directly call pread64() within the code, things work fine.
Created attachment 155458 [details]
strace output for the code.
Please read
info libc 'Feature Test Macros'
and try to use
-Wimplicit-function-declaration (part of -Wall)
on your sources (and, unrelated, remember that when O_CREAT is used, 3rd argument
to open is mandatory).
pread and pread64 functions were added in SUSv3, therefore they are only
available in the headers with -D_XOPEN_SOURCE=500, -D_XOPEN_SOURCE=600,
or -D_GNU_SOURCE.
When you don't have prototype of a function, in addition to e.g. miscompiling
functions that return some other type than int, with -D_FILE_OFFSET_BITS=64 the
non-*64 functions aren't redirected to their *64 counterparts either. So,
in your testcase, the result is basically:
int pread (); // implicit function declaration
...
pread (fdin, buf, 8192, off); // int, void *, uint, long long arguments
But, pread takes int, void *, uint, long arguments. On little endian 32-bit arch
that means just using the bottom 32 bits of off, on big endian 32-bit arch
usually the top 32 bits of off, but on ppc additionally long long values are
passed only starting on odd registers, so int, void *, uint, long long is
passed as int, void *, uint, pad, high 32 bits, low 32 bits.
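In other words, the application-side fix is simply to make the prototype visible before the system headers are included, for example (a minimal sketch with a placeholder file path, not taken from the bug report):

/* Compile with: gcc -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64 fixed.c
   Defining _XOPEN_SOURCE (or _GNU_SOURCE) before any system header makes the
   pread() prototype visible, so the call is correctly redirected to pread64(). */
#define _XOPEN_SOURCE 500

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    char buf[8192];
    off_t off = 73728;                           /* off_t is now 64-bit */
    int fd = open("/tmp/somefile", O_RDONLY);    /* placeholder path */
    if (fd < 0) { perror("open"); return 1; }
    ssize_t n = pread(fd, buf, sizeof buf, off); /* offset is passed correctly */
    printf("read %zd bytes\n", n);
    close(fd);
    return 0;
}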
|
https://bugzilla.redhat.com/show_bug.cgi?id=241370
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
Applies to:
Windows XP Service Pack 3
Windows XP SP 3
Note: You should check for the latest version of the different files.
Note: These are not scheduled to be included with a service pack. i.e. There is no Windows XP Service Pack 4 (SP4).
Note: The last Windows XP SP3 hotfix was from July 2012.
List of Windows XP related hotfixes post SP3 for Windows XP SP3 of Sep. 2012:
2732488 "NTVDM.EXE has encountered a problem and needs to close" error message when you use 16-bit applications in Windows XP
Ntkrnlmp.exe 5.1.2600.6259
Ntkrnlpa.exe 5.1.2600.6259
Ntkrpamp.exe 5.1.2600.6259
Ntoskrnl.exe 5.1.2600.6259
2705368 "0x0000007E" Stop error when you use McAfee Device Control tool or a similar tool to restrict USB audio in Windows XP SP3
Usbaudio.sys 5.1.2600.6223
2494305 You cannot apply a Windows Installer package after you install hotfix 979465 on a computer that is running Windows XP
Msi.dll 4.5.6002.22572
Msihnd.dll 4.5.6002.22193
2454533 Description of a shared folder that is mapped to a network drive is not displayed on a Windows XP SP3-based computer that has security update MS10-066 installed
Davclnt.dll 5.1.2600.6047
Webclnt.dll 5.1.2600.6047
2282612 You cannot play an AVI file in Windows XP after you install security update 975560
Quartz.dll 6.5.2600.6010
2270406 "0x000000D1" Stop error message when you try to refresh a webpage in Internet Explorer on a computer that is running Windows XP SP3
Tcpip.sys
Update.exe
Update.ver
982551 You cannot close a console window of an application after you stop debugging the application in Visual Studio on a computer that is running Windows XP or Windows Server 2003
Csrsrv.dll 5.1.2600.5981
981669 The installation process of a MSI package that contains multiple packages stops responding (hangs) in Windows XP, Windows Vista, or Windows Server 2008
Msi.dll 4.5.6002.22362
Msihnd.dll 4.5.6002.22193
978835 Service cannot access the \?? namespace in Windows XP
Services.exe 5.1.2600.5922
976426 Internet Explorer opens many windows when you click a bookmark link and then close the existing Internet Explorer window quickly
Mshtml.dll 6.00.2900.5897
975167 FIX: Error message when you run a Windows Sockets application that opens many connections on a Windows XP Embedded-based device: "Stop 0x00000044"
Afd.sys 5.1.2600.5877
972828 Files that are copied from a Windows Server 2008-based remote computer to a Windows XP SP3-based client computer by using the Remote Desktop Connection 6.1 client are corrupted
Lhmstscx.dll 6.0.6001.22499
972483 On a computer that is running Windows XP, GDI object handles become leaked when you run a multithreaded application that uses a GDI object
Win32k.sys 5.1.2600.5829
972397 A hotfix for Windows Installer is available for Windows XP, Windows Server 2003, Windows Vista, and Windows Server 2008
Msi.dll 4.5.6002.22172
Msihnd.dll 4.5.6002.22193
971455 A computer that is running Windows XP SP3 cannot authenticate a wireless router that uses the Wi-Fi Protected Setup (WPS) when the router is configured to use Wired Equivalent Privacy (WEP)
Wzcdlg.dll 5.1.2600.5815
971165 The CLIENTNAME environment variable returns the value "Console" instead of the actual client name when users first log on to a Windows XP SP3-based computer by using Remote Desktop Connection
Termsrv.dll 5.1.2600.5815
970922 TIFF documents are corrupted when you rotate them in Windows Picture and Fax Viewer on a computer that is running Windows XP SP3
Shimgvw.dll 6.0.2900.5815
970685 Error message when you try to access an SD card on a Windows XP-based computer that has a particular combination of SD host controller and SD card: "The disk in drive <X> is not formatted"
Sdbus.sys 6.0.4069.5813
970048 Slow printing performance when you print to an LPR printer from a Windows XP-based computer
Tcpmon.dll 5.1.2600.5795
969744 Underlines are missing when you print a document on a computer that runs Windows Vista, Windows Server 2008, or Windows XP Service Pack 3
Stddtype.gdl
Stdnames.gpd
Stdschem.gdl
Stdschmx.gdl
Unidrv.dll
Unidrv.hlp
Unidrvui.dll 0.3.6002.22123
Unires.dll 0.3.6002.22123
969395 FIX: Windows Movie Maker crashes when you install more than 100 video transitions or video effects on a computer that is running Windows XP
Qedit.dll 6.5.2600 .5783
969262 Windows XP stops responding when heavy I/O operations occur on an NTFS-formatted volume.
Ntfs.sys 5.1.2600.5782
969179 A Windows XP-based computer becomes unresponsive or displays a "STOP 0x000000A5" error message during system shutdown
Sdbus.sys 6.0.4069.5780
969145 A Windows XP Service Pack 3-based computer crashes when you use an SD card
Sdbus.sys 6.0.4069.5778
969111 A Windows XP Service Pack 3-based client computer cannot use the IEEE 802.1x authentication when you use PEAP with PEAP-MSCHAPv2 in a domain
Rastls.dll 5.1.2600.5780
968967 The CPU usage of an application or a service that uses MSXML 6.0 to handle XML requests reaches 100% in Windows Server 2008, Windows Vista, Windows XP Service Pack 3, or other systems that have MSXML 6.0 installed
Msxml6.dll 6.20.1102.0
968764 Streaming USB 1.1 devices that are connected to an external USB 2.0 hub behave incorrectly on a Windows XP-based, Windows Vista-based, or Windows Server 2008-based computer
Usbehci.sys 5.1.2600.5778 30,336 18-Mar-2009 11:02 x86 SP3
Usbport.sys 5.1.2600.5778
967885 The focus moves through windows in the wrong order when you run a RemoteApp program on a Windows XP, Windows Vista, or Windows Server 2008-based client computer
Lhmstscx.dll 6.0.6001.22493
967493 FIX: You cannot use the TAB key to move the focus out of an XBAP that is hosted in an IFrame element on a computer that is running the .NET Framework 3.0 Service Pack 2
Presentationcore.dll 3.0.6920.4000
Presentationframework.dll 3.0.6920.4000
Presentationhost.exe 3.0.6920.4000
Presentationhostdll.dll 3.0.6920.4000
Presentationhostproxy.dll 3.0.6920.4000
967048 Error message on a Windows XP-based computer that has a USB card reader: "Stop 0x000000D1"
Usbccid.inf
Usbccid.sys 5.2.3790.4476
961853 Error message when you try to access a network share in a private network: "There are currently no logon servers available to service the logon request"
Netlogon.dll 5.1.2600.5741
961555 A Windows Server 2003 Service Pack 2-based computer or a Windows XP Service Pack 3-based computer randomly crashes
Ntkrnlmp.exe 5.1.2600.5845
Ntkrnlpa.exe 5.1.2600.5845
Ntkrpamp.exe 5.1.2600.5845
Ntoskrnl.exe 5.1.2600.5845
961187 If you reconnect a removable storage device to a computer that is running Windows XP, the operating system cannot find the removable storage device
Mountmgr.sys 5.1.2600.5771
Mountvol.exe 5.1.2600.5771
960921 If you start a Windows XP-based portable computer while it is running on battery power, the brightness of the LCD screen is not decreased as expected
Videoprt.sys 5.1.2600.5745
960655 You encounter several problems on a Windows XP SP3-based computer when the EAP-TLS machine authentication fails during system startup
Dot3msm.dll 5.1.2600.5745
959682 FIX: A Message Queuing 3.0 message is rejected on the receiver when you send the message by using an external certificate from a Windows XP Service Pack 3-based computer
Mqac.sys 5.1.0.1111
Mqad.dll 5.1.0.1111
Mqdscli.dll 5.1.0.1111
Mqise.dll 5.1.0.1111
Mqqm.dll 5.1.0.1111
Mqrt.dll 5.1.0.1111
Mqsec.dll 5.1.0.1111
Mqupgrd.dll 5.1.0.1111
Mqutil.dll 5.1.0.1111
959658 A memory leak problem occurs when you run an application that uses the HttpSendRequest function of the WinHTTP API or of the WinINet API to send Secure Sockets Layer requests in Windows XP Service Pack 3
Crypt32.dll 5.131.2600.5707
959465 Write protection does not always work on SD memory cards that are plugged into a computer that runs Windows XP, Windows Vista, or Windows Server 2008
Sdbus.inf
9
Ariblk.ttf
959237 FIX: Internet Explorer may crash when you browse a Web page that constantly fetches a recordset asynchronously and filters the recordset at the same time from an instance of SQL Server
Msadce.dll 2.81.3008.0
959160 When you run an application that uses the CryptEnumProviderTypes function application in Windows XP Service Pack 3, you receive the error message ERROR_MORE_DATA (0xea)
Advapi32.dll 5.1.2600.5793
958877 A Windows XP-based client computer can establish a security association to a peer computer even though the client computer does not have a System Health Authentication OID
Oakley.dll 5.1.2600.5696
958347 After you hot unplug a device that is connected through a 1394 FireWire hub on a Windows XP-based computer, the device is still present in the system
1394bus.sys 5.1.2600.5689
958259 Some embedded OLE objects that you created in Office 2008 for Mac cannot be edited in an installation of Office that is running on a Windows XP-based computer
Ole32.dll 5.1.2600.5692
Acadproc.dll
Apphelp.sdb
Apph_sp.sdb
Apps.chm
Apps_sp.chm
Drvmain.sdb
Sysmain.sdb
958244 The system may stop responding when you restart a Windows XP-based multicore computer
Halmacpi.dll 5.1.2600.5687
Halmps.dll 5.1.2600.5687
958071 You receive error code 1206 when you run an application that uses the WLanSetProfile function on a Windows XP Service Pack 3-based computer
Wlanapi.dll 5.1.2600.5684
958058 When you try to log on to a Windows XP SP3-based computer by using a roaming profile, the roaming profile cannot load.
Lsasrv.dll 5.1.2600.5792
957931 A Windows XP-based, Windows Vista-based, or Windows Server 2008-based computer does not respond to 802.1X authentication requests for 20 minutes after a failed authentication
Dot3svc.dll 5.1.2600.5745
957808 After you start or stop the Network Access Protection Agent service on a Windows XP Service Pack 3-based client computer, the value of the IKEFlags registry entry may change
Napipsec.dll 5.1.2600.5683
957502 Error message when you try to open some MMC 3.0 snap-ins in a localized version of Windows XP Service Pack 3: "MMC could not create the snap-in.
Mmcs.chm
Mmcshext.dll 5.2.3790.4136
957218 A user name that contains Unicode characters is not handled correctly in Windows XP Service Pack 3 during the EAP authentication
Eappgnui.dll 5.1.2600.5663
Eapphost.dll 5.1.2600.5663
957495 The action controls in Sound Recorder are missing or only partly visible if you set the font size to Large or to Extra Large in a non-English version of Windows XP
Sndrec32.exe 5.1.2600.5671
957070 When a user offers a Remote Assistance invitation to an expert user who is running Windows XP SP3, the expert’s logon attempt forcibly logs off the original user
Lhmstscx.dll 6.0.6001.22307
956630 FIX: Persisted workflows do not run after you upgrade the tracking service to a newer version in Windows Workflow Foundation
System.workflow.runtime.dll 3.0.4203.4076
956588 The job-level PrintTicket is associated with the document-level PrintTicket and with the page-level PrintTicket when you print a document to an XPS driver on a Windows XP-based computer
Xpssvcs.amd64.dll 6.0.6001.22204
Xpssvcs.dll 6.0.6001.22204
955830 When you connect to a Windows XP-based computer by using a remote desktop connection, the computer may be not able to automatically enter the Sleep mode again after you log off the computer
Tdtcp.sys 5.1.2600.5770
955408 If you have hotfix 885222 applied on a Windows XP SP2-based computer, and then you upgrade to Windows XP SP3, an installed 1394b FireWire device reverts from S400 speed to S100
Cstupd1394sidspeed.dll 5.1.2600.5657
955147 The installation process may stop responding when you try to install some software on a Windows XP-based computer
Msi.dll 3.1.4001.5659
955109 Error message when you run an application that uses the Application Desktop Toolbar (AppBar) component on a computer that is running Windows XP SP2 or Windows XP SP3: “0xC0000005 (Access Violation)”
Explorer.exe 6.0.2900.5634
954879 The LSASS.exe process crashes and the computer restarts when you try to start the Network Access Protection Agent service on a Windows XP Service Pack 3 -based client computer
Oakley.dll 5.1.2600.5626
954550 Some Microsoft XPS features are not available in Windows Server 2003 and in Windows XP
$shtdwn$.req
Filterpipelineprintproc.amd64.dll 6.1.3790.4316
Filterpipelineprintproc.dll 6.1.2600.5635
Printfilterpipelinesvc.exe
Prntvpt.dll
Xpshhdr.dll 6.0.6001.22212
Xpssvcs.amd64.dll 6.0.6001.22204
Xpssvcs.dll 6.0.6001.22204
954232 The On-Screen Keyboard behavior on a Windows XP-based computer does not mimic the physical keyboard behavior in certain scenarios
Osk.exe 5.1.2600.5620
953979 Device Manager may not show any devices and Network Connections may not show any network connections after you install Windows XP Service Pack 3 (SP3)
Fixccs.exe 5.1.2600.5614
953955 The Win32_Processor class returns the incorrect Name property for the processors on a computer that is running Windows XP or Windows Server 2003 and the computer has Intel Core 2 Duo processors installed
Cimwin32.dll 5.1.2600.5636
953761 Some DHCP Options are not recognized on a Windows XP SP3-based client computer when the DHCP server offer includes option 43
Dhcpcsvc.dll 5.1.2600.5614
953760 When you enable SSO for a terminal server from a Windows XP SP3-based client computer, you are still prompted for user credentials when you log on to the terminal server
Kerberos.dll 5.1.2600.5615
Msv1_0.dll 5.1.2600.5749
953609 Error message when you try to add a wireless network to a Windows XP-based computer that has hotfix 917021 applied: "At least one of your changes was not applied successfully to the wireless configuration"
Wzcdlg.dll 5.1.2600.5621
953028 On a computer that is running Windows Server 2003 or Windows XP, an application experiences an access violation and then crashes if the computer has more than four cores or more than four logical processors
D3d9.dll 5.3.2600.5601
952079 You cannot use the Backup Utility to restore certain EFS-encrypted files on a Windows XP-based computer and data corruption occurs
Ntfs.sys 5.1.2600.5585
951830 When you disable and then re-enable the LAN-side network adapter on a Windows XP SP3-based computer that is configured as a Connection Sharing host, a client computer on the network cannot access the Internet
Ipnathlp.dll 5.1.2600.5584
951822 You receive an error message, the print operation fails, or partial pages are printed when you try to print to a Citizen printer or to an Alps printer in Windows XP Service Pack 3
Alpsres.dll 0.3.1282.0
951347 A memory leak occurs when you use the IFaxIncomingMessageIterator interface to query incoming fax messages on a fax server that is running Windows Server 2003 or Windows XP
Fxscomex.dll 5.2.2600.5588
951126 A multiprocessor computer that is running Windows XP stops responding on a black screen after you resume the computer from hibernation
Halaacpi.dll 5.1.2600.5573
Halacpi.dll 5.1.2600.5573
Halapic.dll 5.1.2600.5573
Halmacpi.dll 5.1.2600.5573
Halmps.dll 5.1.2600.5573
950616 An audio application that uses the Portcls.sys file may stop responding when you run the audio application on a computer that is running Windows XP
Portcls.sys 5.1.2600.5566
950565 FIX: Error message when you insert data to a table that contains two image columns by using the SQL Server ODBC driver: "[Microsoft][ODBC SQL Server Driver][SQL Server]Invalid locator de-referenced"
Odbcbcp.dll 2000.85.3000.0
Sqlsrv32.dll 2000.85.3000.0
950234 Error message when you try to open a shared file in Windows Explorer on a Windows XP-based computer: "<SharePath> is not accessible. Access is denied"
Shell32.dll 6.0.2900.5672
949860 On a Windows XP-based computer, when you start an application by running as another user account, the content in an application’s dialog box or menu does not update automatically
Shell32.dll 6.0.2900.5555
948239 A Windows XP-based computer stops responding when you click Cancel in a dialog box
Win32k.sys 5.1.2600.5591
947460 Error message when you try to open a mapped DFS folder after the computer comes out of standby in Windows XP: "<Drive Letter>:\ is not accessible"
Cscdll.dll 5.1.2600.5731
Mrxsmb.sys 5.1.2600.5731
947100 After a COM application handles an access violation on a Windows Server 2003-based computer or on a Windows XP-based computer, the COM application stops responding
Ole32.dll 5.1.2600.5648
946411 FIX: When you print an XPS file on a Windows XP Service Pack 2 or Service Pack 3-based computer, the characters in the XPS file print incorrectly
Tswpfwrp.exe 3.0.6920.1201
940458 When you use Remote Desktop Connection to connect to a Windows Server 2003-based terminal server, or to Windows XP, the first drive letter of the computer from which you connect is not redirected to the Terminal Services session
Rdpdr.sys 5.1.2600.5616
932578 Event ID 55 may be logged in the System log when you create many files on an NTFS partition on a Windows Server 2003-based or Windows XP-based computer
Ntfs.sys 5.1.2600.5712
932521 A Windows XP-based client computer uses an archived certificate for network authentication after a new certificate is auto-enrolled in a wireless Active Directory domain
Rastls.dll 5.1.2600.5586
Hotfix 950234 should be installed together with hotfix 947853 support.microsoft.com/…/950234
Hotfix 947853 is included in SP3 support.microsoft.com/…/946480 Sorry for bothering.
Oops! The link for hotfix 947853 is support.microsoft.com/…/947853
How about someone downloading all of these, and making them available after the burial.
yeah someone need to make pack of hotfixes for post xp sp3
unoffical sp4 somewhere
|
https://blogs.technet.microsoft.com/yongrhee/2011/06/12/list-of-post-sp3-related-hotfixes-for-windows-xp-sp3/
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
Earnie Boyd - 2013-04-23
- Description has changed:
Diff:
--- old +++ new @@ -8,7 +8,7 @@ #endif ~~~~~~ -That __GNUC_MINOR__ >= 4 should perhaps be __GNUC_MINOR__ == 4. Right +That \_\_GNUC_MINOR\_\_ >= 4 should perhaps be \_\_GNUC_MINOR\_\_ == 4. Right now a hypothetical gcc 3.5.0 would give the error message. Of course, there is no gcc 3.5.x (as 4.0.x came after 3.4.x), so there is not really any problem.
|
https://sourceforge.net/p/mingw/bugs/1958/
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
Converting a single channel PIL Image to a 1-D NumPy array and back is easy:
import numpy import PIL # Convert Image to array img = PIL.Image.open("foo.jpg").convert("L") arr = numpy.array(img) # Convert array to Image img = PIL.Image.fromarray(arr)
Tried with: Python 2.7.3 and Ubuntu 12.04
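One caveat worth adding (my note, not from the original post): Image.fromarray infers the image mode from the array dtype, so a float array that is meant to become an 8-bit grayscale image should be cast to uint8 first, for example:

import numpy
from PIL import Image

arr = numpy.linspace(0, 255, 640 * 480).reshape(480, 640)  # float64 array
img = Image.fromarray(arr.astype(numpy.uint8))              # cast to uint8 for an "L" image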
One thought on “How to convert between NumPy array and PIL Image”
I am working on Python project involving Tkinter and OpenCV. I stumbled on this trick you used. Thank you very much for sharing.
|
https://codeyarns.com/2014/01/16/how-to-convert-between-numpy-array-and-pil-image/
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
Tcl/Tk developers have constructed many interesting widget sets which extend Tk's basic functionality. A few of these--Tix, for example--are reasonably well known and accessible to Tkinter users. What about the rest? When a TkInter programmer sees a promising Tk extension, is it likely to do him or her any good?
Briefly, yes. First, it's important to make the distinction between so-called "pure Tk" extensions and those that involve (external) C-coded compilation. Quite a few useful widgets sets, most notably including BWidgets and tklib, are "pure Tk". That means that Tcl/Tk programmers simply read them in at run time, with no need for (re-)compilation, configuration, or other deployment complexities.
These extensions are nearly as easy for TkInter programmers to use. Here's an example:
If you have Tcl code in a file called foo.tcl and you want to call the Tcl function foo_bar then
import Tkinter root = Tkinter.Tk() root.tk.eval('source {foo.tcl}') root.tk.eval('foo_bar')
will load and execute foo_bar. To see the details of passing and returning arguments, Use the Source Luke, and look at Lib/lib-tk/Tkinter.py. For wrappers of other popular Tk widgets, look at the Python/ directory of the Tixapps distribution.
On the other hand, Tix and BLT are popular Tk extensions which require compilation. These days (since version 8.0 of Tk) most extensions are compiled as dynamic loading packages, and are as easy to load into Tkinter as pure Tk extensions using a Python expression like
root.tk.eval('package require Ext')
For an example of this, see the Lib/lib-tk/Tix.py file in the standard library that loads the Tix extension.
The trick here is to install the extension library directory in a place the Tcl in TkInter will find it. The best place to try is as a subdirectory of Tcl/ in the Python installation. If this does not work, look into the file pkgIndex.tcl in the extension's library directory and try to understand what it is doing to load the .dll or .so shared library. To ask Tcl to consider a specific directory that contains a package, use
root.tk.eval('lappend auto_path {%s}' % the_tcl_directory)
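Putting the two steps together, a small sketch (with a made-up package name and install directory) looks like this:

import Tkinter

root = Tkinter.Tk()

# Point the embedded Tcl interpreter at the directory containing the
# extension's pkgIndex.tcl, then load the package itself.
the_tcl_directory = '/usr/local/lib/ext1.0'   # hypothetical install location
root.tk.eval('lappend auto_path {%s}' % the_tcl_directory)
root.tk.eval('package require Ext')           # hypothetical package name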
FredrikLundh has pages which he calls "work in progress", but which readers are certain to find helpful: ( is gone Fredrik recommends ) and The latter explicitly extends Tkinter through use of Tcl. Also, Gustavo Cordero is working in this same area; his work is likely to show up in the Tcl-ers' Wiki for Tkinter.
|
https://wiki.python.org/moin/How%20Tkinter%20can%20exploit%20Tcl/Tk%20extensions?highlight=Tk
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
ncl_nggcog man page
NGGCOG — Returns the latitudes and longitudes of a set of points approximating a circle at a given point on the surface of the globe.
Synopsis
CALL NGGCOG (CLAT,CLON,CRAD,ALAT,ALON,NPTS)
C-Binding Synopsis
#include <ncarg/ncargC.h>
void c_nggcog(float clat, float clon, float crad, float *alat,
float *alon, int npts)
Description
- CLAT
(an input expression of type REAL) is the latitude, in degrees, of a point on the globe defining the center of the circle.
- CLON
(an input expression of type REAL) is the longitude, in degrees, of a point on the globe defining the center of the circle.
- CRAD
(an input expression of type REAL) specifies the radius of the circle. This is given as a great-circle distance, in degrees.
- ALAT
(an output array, of type REAL, dimensioned NPTS) is an array in which the latitudes of points on the circle are to be returned.
- ALON
(an output array, of type REAL, dimensioned NPTS) is an array in which the longitudes of points on the circle are to be returned.
- NPTS
(an input expression, of type INTEGER) is the desired number of points to be used to represent the circle. Its value determines how accurately the circle will be represented.
C-Binding Description
The C binding argument descriptions are the same as the FORTRAN argument descriptions.
Usage
Let C represent (CLAT,CLON) and let O represent the center of the globe. The circle is the set of all points P on the globe such that the angle POC is of the size specified by CRAD.
SIN and COS are used to generate points representing a circle having the desired radius and centered at the North Pole. These points are then subjected to two rotations - one that brings the circle down to the desired latitude, and another that carries it to the desired longitude.
Examples
Use the ncargex command to see the following relevant example: cpex10.
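As an illustration of the calling sequence (a sketch, not taken from cpex10; the center point, radius, and number of points are made up):

#include <ncarg/ncargC.h>

#define NPTS 100

int main(void)
{
    float alat[NPTS], alon[NPTS];

    /* 100 points approximating a circle of 5-degree great-circle radius
       centered at 40N, 105W. */
    c_nggcog(40.0f, -105.0f, 5.0f, alat, alon, NPTS);

    /* alat/alon now hold the circle's latitudes and longitudes, ready to be
       passed to the NCAR Graphics drawing routines. */
    return 0;
}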
Access
To use NGGCOG or c_nggcog, load the NCAR Graphics libraries ncarg, ncarg_gks, and ncarg_c, preferably in that order.
Messages
None.
See Also
Online: nggsog(3NCARG), ngritd(3NCARG).
University Corporation for Atmospheric Research
The use of this Software is governed by a License Agreement.
|
https://www.mankier.com/3/ncl_nggcog
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
JBoss.orgCommunity Documentation
3.0.13.Final.
Resteasy is bundled with JBoss AS 7. You may have the need to upgrade Resteasy in AS7. The Resteasy distribution comes with a zip file called resteasy-jboss-modules-3.0.13.Final.zip. Unzip this file within the modules/ directory of the JBoss AS7 distribution. This will overwrite some of the existing files there.
Resteasy is bundled with JBoss EAP 6.1. You may have the need to upgrade Resteasy in JBoss EAP 6.1. The Resteasy distribution comes with a zip file called resteasy-jboss-modules-3.0.13.Final.zip. Unzip this file within the modules/system/layers/base/ directory of the JBoss EAP 6.1 distribution. This will overwrite some of the existing files there.
Overwriting Resteasy modules in EAP 6 will place your installation in a less supportable state because Red Hat does not provide direct support for Resteasy 3.x or JAX-RS 2.x in EAP 6; this will only be supported by the community.
Resteasy is bundled with Wildfly. You may have the need to upgrade Resteasy in Wildfly. The Resteasy distribution comes with a zip file called resteasy-jboss-modules-wf8-3.0.13.Final.zip. Unzip this file within the modules/system/layers/base/ directory of the Wildfly distribution. This will overwrite some of the existing files there.
RESTEasy is bundled with JBoss/Wildfly and completely integrated as per the requirements of Java EE 6. First you must at least provide an empty web.xml file. You can of course deploy any custom servlet, filter or security constraint you want to within your web.xml, but the least amount of work is to create an empty web.xml file. Also, resteasy context-params are available if you want to tweak or turn on/off any specific resteasy feature.
<web-app></web-app>
Since we're not using a jax-rs servlet mapping, we must define an Application class that is annotated with the @ApplicationPath annotation. If you return an empty set for classes and singletons, your WAR will be scanned for JAX-RS annotated resource and provider classes.
import javax.ws.rs.ApplicationPath; import javax.ws.rs.core.Application; @ApplicationPath("/root-path") public class MyApplication extends Application { }
The Resteasy distribution has ported the "Restful Java" O'Reilly workbook examples to AS7. You can find these under the directory examples/oreilly-workbook-as7.
Resteasy and JAX-RS are automatically loaded into your deployment's classpath, if and only if you are deploying a JAX-RS Application. If you only want to use the client library, you will have to create a dependency for it within your deployment. Also, only some resteasy features are automatically loaded. To bring in these libraries, you'll have to create a jboss-deployment-structure.xml file within your WEB-INF directory of your WAR file. Here's an example:
<jboss-deployment-structure> <deployment> <dependencies> <module name="org.jboss.resteasy.resteasy-yaml-provider" services="import"/> </dependencies> </deployment> </jboss-deployment-structure>
The
services attribute must be set to import for modules that have default providers
that must be registered. The following
table specifies which modules are loaded by default when JAX-RS services are deployed and which aren't.
If you are using resteasy outside of JBoss/Wildfly, in a standalone servlet container like Tomcat or Jetty
you will need to include the core Resteasy jars in your WAR file. Resteasy provides integration with standalone
Servlet 3.0 containers via the
ServletContainerInitializer integration interface. To
use this, you must also include the
resteasy-servlet-initializer artifact in your WAR
file as well.
<dependency> <groupId>org.jboss.resteasy</groupId> <artifactId>resteasy-servlet-initializer</artifactId> <version>3.0.13.Final</version> </dependency>
We strongly suggest that you use Maven to build your WAR files as RESTEasy is split into a bunch of different modules. You can see an example Maven project in one of the examples in the examples/ directory. If you are not using Maven.
The
resteasy-servlet-initializer artifact will not work in Servlet versions older than
3.0. You'll then have to manually declare the Resteasy servlet in your WEB-INF/web.xml file of your WAR project.
For example:
. parameters. public String".");
extends Customer { This is a nice; ... }"., if.
As a consumer of XML datasets, JAXB is subject to a form of attack known as the XXE (Xml eXternal Entity) Attack (), in which expanding an external entity causes an unsafe file to be loaded. Preventing the expansion of external entities is discussed in Section.. Resteasy supports both Jackson 1.9.x and Jackson 2.2.x. Read further on how to use each.
If you're deploying Resteasy outside of JBoss AS7, add the resteasy jackson provider to your WAR pom.xml build
<dependency> <groupId>org.jboss.resteasy</groupId> <artifactId>resteasy-jackson-provider</artifactId> <version>3.0.13.Final</version> </dependency>
If you're deploying Resteasy with JBoss AS7, there's nothing you need to do except to make sure you've updated your AS7 distribution with the latest and greatest Resteasy. See the installation section of this documentation for more details.
If you're deploying Resteasy outside of JBoss AS7, add the resteasy jackson2 provider to your WAR pom.xml build
<dependency> <groupId>org.jboss.resteasy</groupId> <artifactId>resteasy-jackson2-provider</artifactId> <version>3.0.13.Final</version> </dependency>
If you want to use Jackson 2.2.x inside of JBoss AS7 you'll have to create a jboss-deployment-structure.xml file within your WEB-INF directory. By default AS7 includes the Jackson 1.9.x JAX-RS integration, so you'll want to exclude it from your dependencies, and add the jackson2 ones.
<jboss-deployment-structure> <deployment> <exclusions> <module name="org.jboss.resteasy.resteasy-jackson-provider"/> </exclusions> <dependencies> <module name="org.jboss.resteasy.resteasy-jackson2-provider" services="import"/> </dependencies> </deployment> </jboss-deployment-structure>
If you are using the Jackson 2.2.x".
No, this is not the JSONP you are thinking of! JSON-P is a new Java EE 7 JSON parsing API. Horrible name for a new JSON parsing API! What were they thinking? Anyways, Resteasy has a provider for it. If you are using Wildfly, it is required by Java EE 7 so you will have it automatically bundled. Otherwise, use this maven dependency.
<dependency> <groupId>org.jboss.resteasy</groupId> <artifactId>resteasy-json-p-provider</artifactId> <version>3.0.13.Final</version> </dependency>
It has built in support for JsonObject, JsonArray, and JsonStructure as request or response entities. It should not conflict with Jackson or Jettison if you have that in your path too.(); // You must call close to delete any temporary files created // Otherwise they will be deleted on garbage collection or on JVM exit void close(); }); } input.close(); } }<<() {}); } input.close(); } }(MultipartConstants(MultipartConstants(MultipartConstants.MULTIPART_RELATED) public void putXopWithMultipartRelated(@XopWithMultipartRelated Xop xop) { // do very important things here } }
We used @Consumes(MultipartConstants; } }.
JAX-RS 2.0 has the javax.ws.rs.ext.ParamConverterProvider to help in this situation. See the javadoc for more details.
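As an illustration, a minimal ParamConverterProvider sketch for a hypothetical Color enum (this is not code from the Resteasy distribution, just the general shape of the standard JAX-RS 2.0 API):

import java.lang.annotation.Annotation;
import java.lang.reflect.Type;
import javax.ws.rs.ext.ParamConverter;
import javax.ws.rs.ext.ParamConverterProvider;
import javax.ws.rs.ext.Provider;

enum Color { RED, GREEN, BLUE } // hypothetical value type used as a resource method parameter

@Provider
public class ColorConverterProvider implements ParamConverterProvider {

   @Override
   @SuppressWarnings("unchecked")
   public <T> ParamConverter<T> getConverter(Class<T> rawType, Type genericType, Annotation[] annotations) {
      if (!rawType.equals(Color.class)) return null;
      return (ParamConverter<T>) new ParamConverter<Color>() {
         @Override
         public Color fromString(String value) {
            return Color.valueOf(value.toUpperCase()); // e.g. "red" -> Color.RED
         }

         @Override
         public String toString(Color value) {
            return value.name().toLowerCase();
         }
      };
   }
}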
@NameBinding
public @interface DoIt {}

@DoIt
public class MyFilter implements ContainerRequestFilter {...}

@Path("/root")
public class MyResource {
   @GET
   @DoIt
   public String get() {...}
}
AsyncResponse also has other methods to cancel the execution. See javadoc for more details.
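For reference, the general shape of an asynchronous JAX-RS 2.0 resource method (a generic sketch reconstructing the kind of listing that was garbled above, not the exact original):

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.container.AsyncResponse;
import javax.ws.rs.container.Suspended;

@Path("/async")
public class AsyncResource {

   @GET
   @Produces("text/plain")
   public void get(@Suspended final AsyncResponse response) {
      Thread t = new Thread() {
         @Override
         public void run() {
            try {
               // do the expensive work off the request thread...
               String result = "hello async";
               response.resume(result); // completes the suspended request
            } catch (Exception e) {
               e.printStackTrace();
               response.resume(e);
            }
         }
      };
      t.start();
   }
}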
NOTE: The old Resteasy proprietary API for async http has been deprecated and may be removed as soon as Resteasy 3.1..0.13.0.13.Final</version> </dependency> <dependency> <groupId>javax.servlet</groupId> <artifactId>servlet-api</artifactId> <version>2.5</version> </dependency>); }.0.13.Final</version> </dependency>> JBoss AS7 security infrastructure to secure your web applications and restful services. You can turn an existing web app into an OAuth 2.0 Access Token Provider or you can turn a JBoss AS7 Security Domain into a central authentication and authorization server that a whole host of applications and services can use. Here are the features in a nutshell:.
The Resteasy distribution comes with an OAuth2 Skeleton key example. This is a great way to see OAuth2 in action and how it is configured. You may also want to use this as a template for your applications.
JBoss AS 7.1.x or higher
HTTPS is required. See the JBoss 7 documentation on how to enable SSL for web applications
A username/password based JBoss JBoss AS7 JBoss AS7 JBoss. is deprecated and will be removed in subsequent versions of Resteasy unless there is an outcry from the community. We're focusing on OAuth 2.0 protocols. Please see our OAuth 2.0 Work.(); }
JSON Web Signature and Encryption (JOSE JWT) is a new specification that can be used to encode content as a string and either digitally sign or encrypt it. I won't go over the spec here Do a Google search on it ifyou); }>3.0.13.Final<>3.0.13.Final<); Response res = target.request().post(Entity.entity(output, "application/pkcs7-m> ....
<dependency> <groupId>org.jboss.resteasy</groupId> <artifactId>resteasy-cdi</artifactId> <version>${project.version}</version> </dependency>
Add the RequestScopeModule to your modules to allow objects to be scoped to the HTTP request by adding the @RequestScoped annotation to your class.
JAX-RS 2.0 introduces a new client API so that you can make http requests to your remote RESTful web services. It is a 'fluent' request building API with really 3 main classes: Client, WebTarget, and Response. The Client interface is a builder of WebTarget instances. WebTarget represents a distinct URL or URL template from which you can build more sub-resource WebTargets or invoke requests on.
There are really two ways to create a Client: the standard way, or you can use the ResteasyClientBuilder class. The advantage of the latter is that it gives you a few more helper methods to configure your client.
Client client = ClientBuilder.newClient(); ... or... Client client = ClientBuilder.newBuilder().build(); WebTarget target = client.target(""); Response response = target.request().get(); String value = response.readEntity(String.class); response.close(); // You should close connections! ResteasyClient client = new ResteasyClientBuilder().build(); ResteasyWebTarget target = client.target("");
Resteasy will automatically load a set of default providers. (Basically all classes listed in all META-INF/services/javax.ws.rs.ext.Providers files). Additionally, you can manually register other providers, filters, and interceptors through the Configuration object provided by the method call Client.configuration(). Configuration also lets you set various configuration properties that may be needed.
Each WebTarget has its own Configuration instance which inherits the components and properties registered with its parent. This allows you to set specific configuration options per target resource. For example, username and password.
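As a minimal sketch (using the register() and property() methods of the final JAX-RS 2.0 API; the filter, property name, and URL below are made up for illustration):

import java.io.IOException;
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.ClientRequestContext;
import javax.ws.rs.client.ClientRequestFilter;
import javax.ws.rs.client.WebTarget;

public class ClientConfigSketch {

    // A trivial client-side filter, just to have something to register.
    public static class LoggingFilter implements ClientRequestFilter {
        @Override
        public void filter(ClientRequestContext requestContext) throws IOException {
            System.out.println("Requesting " + requestContext.getUri());
        }
    }

    public static void main(String[] args) {
        Client client = ClientBuilder.newClient();

        // Register the filter for every request made through this client...
        client.register(LoggingFilter.class);

        // ...and set an arbitrary configuration property (name is a placeholder).
        client.property("some.vendor.property", "value");

        // A WebTarget inherits the client's configuration but may add its own,
        // e.g. something that should apply only to this one resource.
        WebTarget target = client.target("http://example.com/api");   // placeholder URL
        target.register(new LoggingFilter());

        client.close();
    }
}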
The Resteasy Proxy:
Client client = ClientFactory.newClient(); WebTarget target = client.target(""); ResteasyWebTarget rtarget = (ResteasyWebTarget)target; SimpleClient simple = rtarget.proxy(SimpleClient.class); simple.putBasic("hello world");
Alternatively you can use the Resteasy client extension interfaces directly:
ResteasyClient client = new ResteasyClientBuilder().build(); ResteasyWebTarget target = client.target(""); SimpleClient simple = target.proxy(SimpleClient.class); simple.putBasic("hello world");
javax.ws.rs.core.Response class:
@Path("/") public interface LibraryService { @GET @Produces("application/xml") Response getAllBooks(); }
It is generally possible to share an interface between the client and server. In this scenario, you just have your JAX-RS services implement an annotated interface and then reuse that same interface to create client proxies to invoke on the client-side.
If a request or a call on a proxy returns a class other than Response, then Resteasy will take care of releasing the connection. For example, in the fragments
WebTarget target = client.target(""); String answer = target.request().get(String.class);
or
ResteasyWebTarget target = client.target(""); RegistryStats stats = target.proxy
If the request or proxy call returns a Response, then the Response.close() method must be used to release the connection.
WebTarget target = client.target(""); Response response = target.request().get(); System.out.println(response.getStatus()); response.close();
You should probably execute this in a try/finally block. Again, releasing a connection only makes it available for another use. It does not normally close the socket.
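A minimal sketch of that pattern (the URL is a placeholder of ours, not from the manual):

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.Response;

public class CloseInFinallySketch {
    public static void main(String[] args) {
        Client client = ClientBuilder.newClient();
        Response response = client.target("http://example.com/api").request().get(); // placeholder URL
        try {
            System.out.println(response.getStatus());
            // read the entity here if needed, e.g. response.readEntity(String.class)
        } finally {
            response.close();   // hands the connection back for reuse; does not normally close the socket
            client.close();
        }
    }
}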
RESTEasy provides support for the validation mandated by the JAX-RS: Java API for RESTful Web Services 2.0 specification, given the presence of an implementation of the Bean Validation specification 1.1, such as Hibernate Validator 5.x.
Validation is not included in the original JAX-RS specification, but RESTEasy 2.x provides a form of validation, including parameter and return value validation, based on Bean Validation 1.0 plus Hibernate 4.x extensions. For applications running in the context of Hibernate Validation 4.x, RESTEasy 3.x inherits the validation semantics from RESTEasy 2.x. This version of validation is in the RESTEasy module resteasy-hibernatevalidate-provider, which produces the artifact resteasy-hibernatevalidator-provider-<version>.jar. It follows the validation sequence given in the first section, detecting field, property, class, parameter, and return value constraints, though with a somewhat less rich semantics than resteasy-validator-provider-11.
Unlike resteasy-validator-provider-11, resteasy-hibernatevalidate-provider does not do validation testing by default. Validation must be turned on. There are two relevant annotations - org.jboss.resteasy.plugins.validation.hibernate.ValidateRequest and org.jboss.resteasy.plugins.validation.hibernate.DoNotValidateRequest - that are used to indicate what needs validation and what does not. We can tell RESTEasy to validate any method in a resource by annotating the resource:
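(The original example is missing from this copy; the following is a plausible reconstruction rather than the manual's exact code. The resource, path, and constraint are ours.)

import javax.validation.constraints.Size;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import org.jboss.resteasy.plugins.validation.hibernate.ValidateRequest;

@Path("/greet")
@ValidateRequest   // ask RESTEasy to validate every method of this resource
public class GreetingResource {

    @GET
    @Path("/{name}")
    public String greet(@Size(min = 2, max = 20) @PathParam("name") String name) {
        return "Hello " + name;
    }
}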
RESTEasy supplies two implementations of GeneralValidator, in the modules resteasy-validator-provider-11 and resteasy-hibernatevalidator-provider.
Both supplied validators implement the GeneralValidator interface mentioned above. In the Maven coordinates below, replace 3.0.13.Final with the current Resteasy version you want to use.
<repositories>
   <repository>
      <id>jboss</id>
      <url></url>
   </repository>
</repositories>
<dependencies>
   <!-- core library -->
   <dependency>
      <groupId>org.jboss.resteasy</groupId>
      <artifactId>resteasy-jaxrs</artifactId>
      <version>3.0.13.Final</version>
   </dependency>
   <dependency>
      <groupId>org.jboss.resteasy</groupId>
      <artifactId>resteasy-client</artifactId>
      <version>3.0.13.Final</version>
   </dependency>
   <!-- optional modules -->
   <!-- JAXB support -->
   <dependency>
      <groupId>org.jboss.resteasy</groupId>
      <artifactId>resteasy-jaxb-provider</artifactId>
      <version>3.0.13.Final</version>
   </dependency>
   <!-- multipart/form-data and multipart/mixed support -->
   <dependency>
      <groupId>org.jboss.resteasy</groupId>
      <artifactId>resteasy-multipart-provider</artifactId>
      <version>3.0.13.Final</version>
   </dependency>
   <!-- Resteasy Server Cache -->
   <dependency>
      <groupId>org.jboss.resteasy</groupId>
      <artifactId>resteasy-cache-core</artifactId>
      <version>3.0.13.Final</version>
   </dependency>
   <!-- Ruby YAML support -->
   <dependency>
      <groupId>org.jboss.resteasy</groupId>
      <artifactId>resteasy-yaml-provider</artifactId>
      <version>3.0.13.Final</version>
   </dependency>
   <!-- JAXB + Atom support -->
   <dependency>
      <groupId>org.jboss.resteasy</groupId>
      <artifactId>resteasy-atom-provider</artifactId>
      <version>3.0.13.Final</version>
   </dependency>
   <!-- Spring integration -->
   <dependency>
      <groupId>org.jboss.resteasy</groupId>
      <artifactId>resteasy-spring</artifactId>
      <version>3.0.13.Final</version>
   </dependency>
   <!-- Guice integration -->
   <dependency>
      <groupId>org.jboss.resteasy</groupId>
      <artifactId>resteasy-guice</artifactId>
      <version>3.0.13.Final</version>
   </dependency>
   <!-- Asynchronous HTTP support with Servlet 3.0 -->
   <dependency>
      <groupId>org.jboss.resteasy</groupId>
      <artifactId>async-http-servlet-3.0</artifactId>
      <version>3.0.13.Final</version>
   </dependency>
</dependencies>
See the Installation/Configuration section of this document for more information.
|
http://docs.jboss.org/resteasy/docs/3.0.13.Final/userguide/html_single/index.html
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
csQuaternion Class Reference
[Geometry utilities]
Class for a quaternion. More...
#include <csgeom/quaternion.h>
Detailed Description
Class for a quaternion.
A SE3 rotation represented as a normalized quaternion
- See also:
- csDualQuaternion
Definition at line 42 of file quaternion.h.
Constructor & Destructor Documentation
Initialize with identity.
Definition at line 48 of file quaternion.h.
Initialize with given values. Does not normalize.
Definition at line 53 of file quaternion.h.
Construct from a vector and given w value.
Definition at line 58 of file quaternion.h.
Copy-constructor.
Definition at line 63 of file quaternion.h.
Member Function Documentation
Set this quaternion to its own conjugate.
Definition at line 187 of file quaternion.h.
Return Euclidean inner-product (dot).
Definition at line 193 of file quaternion.h.
Get the exponential of this quaternion.
Get a quaternion as axis-angle representation.
Definition at line 251 of file quaternion.h.
Get the conjugate quaternion.
Definition at line 181 of file quaternion.h.
Get quaternion as three Euler angles X, Y, Z, expressed in radians.
Get quaternion as a 3x3 rotation matrix.
Get the logarithm of this quaternion.
Interpolate this quaternion with another using normalized linear interpolation (nlerp) using given interpolation factor.
Get the norm of this quaternion.
Definition at line 205 of file quaternion.h.
Multiply by scalar.
Definition at line 148 of file quaternion.h.
Multiply this quaternion by another.
Definition at line 127 of file quaternion.h.
Add quaternion to this one.
Definition at line 92 of file quaternion.h.
Subtract quaternion from this one.
Definition at line 106 of file quaternion.h.
Divide by scalar.
Definition at line 171 of file quaternion.h.
Rotate vector by quaternion.
Definition at line 223 of file quaternion.h.
Set the components.
Definition at line 70 of file quaternion.h.
Set a quaternion using axis-angle representation.
Definition at line 238 of file quaternion.h.
Set quaternion using Euler angles X, Y, Z, expressed in radians.
Set quaternion to identity rotation.
Definition at line 79 of file quaternion.h.
Friends And Related Function Documentation
Multiply by scalar.
Definition at line 142 of file quaternion.h.
Multiply by scalar.
Definition at line 136 of file quaternion.h.
Multiply two quaternions, Grassmann product.
Definition at line 119 of file quaternion.h.
Add two quaternions.
Definition at line 85 of file quaternion.h.
Get the negative quaternion (unary minus).
Definition at line 113 of file quaternion.h.
Subtract two quaternions.
Definition at line 99 of file quaternion.h.
Divide by scalar.
Definition at line 164 of file quaternion.h.
Divide by scalar.
Definition at line 157 of file quaternion.h.
Member Data Documentation
x, y and z components of the quaternion
Definition at line 311 of file quaternion.h.
w component of the quaternion
Definition at line 314 of file quaternion.h.
The documentation for this class was generated from the following file:
- csgeom/quaternion.h
Generated for Crystal Space 2.0 by doxygen 1.6.1
|
http://www.crystalspace3d.org/docs/online/new0/classcsQuaternion.html
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
0
Hey guys,
I think I have this mostly figured out but I'm still getting some errors; I'm not sure if it's a syntax thing or if I called out a variable wrong.
For some reason eclipse is saying that my variable average "may not have locally been declared" which doesn't make sense to me, because it is, or so I think. If you could take a look and give me an idea of what is causing this I would really appreciate it!
import java.util.Random; public class lab11a_killackey { public static void main(String args[]) { Random randomNumbers = new Random(); int a[]= new int [ 1000 ]; a [1000] = 1 + randomNumbers.nextInt( 51 ); int big = -1; int small = 52; int n; for (int i=0; i<1000; i++) { if (a[i]> big) big = a[i]; } for (int i=0; i<1000; i++) { if (a[i] < small ) small = a[i]; } int average; for (int i= 0; i<1000; i++) average += (a[i])/1000; int ans; ans= countItems(a, average); int ansb; ansb = countItemsb(a, average); System.out.printf("The largest of the 1000 integers is: %d", big); System.out.printf("The smallest of the 1000 integers is: %d", small); System.out.printf("The average of the 1000 integers is: %d", average); System.out.printf("The number of integers below average is: %d", ansb); System.out.printf("The number of integers above average is: %d",ans); } public static int countItems( int a [], int average) { int cnt = 0; for (int i= 0; i<1000; i++) if (a[i] > average) cnt ++; return cnt; } public static int countItemsb(int a[], int average) { int cntb = 0; for (int i=0; i<1000; i++) if(a[i] < average) cntb++; return cntb; } }
Edited by mike_2000_17: Fixed formatting
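For readers hitting the same error: a local variable in Java must be definitely assigned before it is read, and average is declared but never given an initial value before "+=" is applied to it; also, a[1000] writes one element past the end of the array while the rest stays zero. A hedged sketch of the two likely fixes (this is an editorial illustration, not a reply from the original thread):

import java.util.Random;

public class AverageSketch {
    public static void main(String[] args) {
        Random randomNumbers = new Random();
        int[] a = new int[1000];
        for (int i = 0; i < a.length; i++) {
            a[i] = 1 + randomNumbers.nextInt(51);  // fill every element; a[1000] alone is out of bounds
        }
        int average = 0;                           // initialize so the "not initialized" error goes away
        for (int i = 0; i < a.length; i++) {
            average += a[i];                       // sum first, divide once, to limit integer-division loss
        }
        average /= a.length;
        System.out.println("average = " + average);
    }
}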
|
https://www.daniweb.com/programming/software-development/threads/185318/array-method-problem
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
For instance, to check at compile time for API features that appeared in hwloc 1.2, you should use:
#include <hwloc.h>
#if HWLOC_API_VERSION >= 0x00010200
...
#endif
One of the major changes in hwloc 1.1 is the renaming of the cpuset API into the bitmap API. If your code still uses hwloc_cpuset_alloc, you should use hwloc_bitmap_alloc instead and add the following code to one of your common headers:
#include <hwloc.h>
#if HWLOC_API_VERSION < 0x00010100
#define hwloc_bitmap_alloc hwloc_cpuset_alloc
#endif
Similarly, the hwloc 1.0 interface may be detected by comparing HWLOC_API_VERSION with 0x00010000.
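In other words (a minimal sketch; only the macro comparison itself comes from this FAQ, the surrounding comments are ours):

#include <hwloc.h>

#if defined(HWLOC_API_VERSION) && HWLOC_API_VERSION >= 0x00010000
/* the hwloc 1.0 API (or later) is available */
#else
/* pre-1.0 hwloc, or no version macro at all */
#endif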
hwloc 0.9 did not define any HWLOC_API_VERSION but this very old release probably does not deserve support from your application anymore.
|
https://www.open-mpi.org/projects/hwloc/doc/v1.2.2/faq.php
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
A common scenario in android development is being able to add a custom activity and navigate to it.
There are also several very popular libraries that require this functionality. See for example the native YouTube SDKs. Here is a snippet from their documentation:
A view for displaying YouTube videos. Using this View directly is an alternative to using the YouTubePlayerFragment. If you choose to use this view directly, your activity needs to extend YouTubeBaseActivity.
The way you add a custom Activity or class in a native android application is to declare it in the AndroidManifest.xml file, as well as to create the declared class. The same goes not just for activities, but for any application component.
Prior to the 2.0 release in NativeScript you couldn’t declare custom JavaScript classes in the AndroidManifest.xml.
Thanks to the SBG, now you can have custom Java classes and all application components, declared in the AndroidManifest.xml file. That enables the full native functionality already present in android. To make things clearer, let’s give a common example
Step1 - declare the Activity in the AndroidManifest.xml file
As we saw the Activity is an android class that needs to be declared in the AndroidManifest.xml, so let’s do that first.
Go to app/App_Resources/Android/AndroidManifest.xml and add an activity tag, so the xml file look something like this:
file: "app/App_Resources/Android/AndroidManifest.xml"
<manifest …>
    <application … >
        <activity android:name="com.tns.MyActivity" />
    </application>
</manifest>
NOTE: Notice the “MyActivity” class we just declared needs to be put in at least one namespace, or else it won’t be recognized as a custom class.
Step2: Provide the actual class implementation
Now that we have declared the custom activity in the manifest we need to provide the actual class implementation.
Create that custom js class that will contain the logic in the declared activity: app/MyActivity.js.
NOTE: The JavaScript file can be named any way you like, it is necessary however to place it somewhere inside the "app/" folder. I name it the same way, as the declared class in the manifest only for the sake of consistency.
A minimal implementation of this file looks like:
Javascript file: "app/MyActivity.js"
android.app.Activity.extend("com.tns.MyActivity", {
    onCreate: function (bundle) {
        this.super.onCreate(bundle);
    }
});
TypeScript file: "app/MyActivity.ts"
@JavaProxy("com.tns.MyActivity")
class MyActivity extends android.app.Activity {
    protected onCreate(bundle) {
        super.onCreate(bundle);
    }
}
NOTE: Notice that the activity we extend is a native activity - android.app.Activity. Here is the place to choose what native class we want to extend. We could also inherit android.support.v7.app.AppCompatActivity with the same result, and that’s a great feature we didn’t have until 2.0.
NOTE: The name of the extend and the activity name, declared in the AndroidManifest.xml, need to be the same: com.tns.MyActivity.
That’s it. You now control your activity like you would in a native android app, but through JavaScript only.
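If you also want to navigate to the new activity from JavaScript, something along these lines should work (a rough, untested sketch; the application-module properties used here are assumptions on my part, not from the original post):

var application = require("application");

// Build an intent for the generated com.tns.MyActivity class and start it
// from the currently visible activity.
var intent = new android.content.Intent(application.android.context, com.tns.MyActivity.class);
application.android.foregroundActivity.startActivity(intent);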
As mentioned some common application components that take advantage of the SBG are activities, applications and widgets. Here are some examples with common scenarios that take advantage of the SBG:
The first example uses custom Application and Activity, in order to enable multidex support
The second example is an implemented android widget.
NOTE: Both examples follow the recommended android way of enabling widgets and multidex support.
There is a bonus that comes with the SBG, performance-wise. There is a significant boost on the initial loading time of the application. Here are the numbers for the default NativeScript app, run on a Nexus 5.
Project structure after build:
+--app/
| +--App_Resources/
| | +--Android/
| | +--AndroidManifest.xml (1)
| |
| +--...
| |--MyActivity.js (2)
+--...
+--platforms/
+--android/
+--src/
+--main/
+--assets/
| +--app/
| +--...
|
+--java/ (2)
+--com
+--tns
+--MyActivity.java (3)
AndroidManifest.xml file you can edit: add activities, application and other application components
The JavaScript class you declare in the AndroidManifest.xml
The file generated by the SBG, that serves as a proxy for your JavaScript class MyActivity.js
On build time the JavaScript files you created will be analyzed and a java proxy will be created for "app/MyActivity.js", it will also be generated where you specified: com/tns/MyActivity.java
The build will take care of compiling the generated java file com/tns/MyActivity.java (3), and packing it into the application.
On install time the AndroidManifest.xml, which you edited, will be able to use the newly generated and compiled MyActivity.class file and will invoke its onCreate method, which in turn will call your JavaScript implementation file /app/MyActivity.js and its overridden onCreate method.
Thanks to the SBG you can now have automatically generated JavaScript proxies, in the form of Java classes, that can be declared in the AndroidManifest.xml file.
Now anyone keen to implement a YouTube player plugin!
|
https://www.nativescript.org/blog/static-binding-generator---what-is-it-good-for
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
Many authors do not include the topic of returning an array from a function in their books or articles. This is because, in most cases, there is no need for an array to be returned from a function: when an array is passed by its name, the address of its first element is passed, so any changes made through the formal argument are reflected in the actual argument. But sometimes a situation arises where an array has to be returned from a function, for example when multiplying two matrices and assigning the result to another matrix. This can be done by using a pointer to an array. The source code for returning a two dimensional array, illustrated with matrix addition, is given here.
#include <stdio.h>

int (*Matrix_sum(int matrix1[][3], int matrix2[][3]))[3]
{
    int i, j;
    for (i = 0; i < 3; i++) {
        for (j = 0; j < 3; j++) {
            matrix1[i][j] = matrix1[i][j] + matrix2[i][j];
        }
    }
    return matrix1;
}

int main()
{
    int x[3][3], y[3][3];
    int (*a)[3]; /* pointer to an array */
    int i, j;

    printf("Enter the matrix1: \n");
    for (i = 0; i < 3; i++) {
        for (j = 0; j < 3; j++) {
            scanf("%d", &x[i][j]);
        }
    }

    printf("Enter the matrix2: \n");
    for (i = 0; i < 3; i++) {
        for (j = 0; j < 3; j++) {
            scanf("%d", &y[i][j]);
        }
    }

    a = Matrix_sum(x, y); /* assigning */

    printf("The sum of the matrix is: \n");
    for (i = 0; i < 3; i++) {
        for (j = 0; j < 3; j++) {
            printf("%d", a[i][j]);
            printf("\t");
        }
        printf("\n");
    }
    return 0;
}
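The return type int (*)[3] is hard to read; one possible variation (my own sketch, not the author's code) hides it behind a typedef:

#include <stdio.h>

/* Row3 is "array of 3 ints", so Row3 * is the same type as int (*)[3]. */
typedef int Row3[3];

Row3 *matrix_sum(int a[][3], int b[][3])
{
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            a[i][j] += b[i][j];
    return a;                     /* a decays to int (*)[3], i.e. Row3 * */
}

int main(void)
{
    int x[3][3] = {{1,2,3},{4,5,6},{7,8,9}};
    int y[3][3] = {{9,8,7},{6,5,4},{3,2,1}};
    Row3 *sum = matrix_sum(x, y);
    for (int i = 0; i < 3; i++) {
        for (int j = 0; j < 3; j++)
            printf("%d\t", sum[i][j]);
        printf("\n");
    }
    return 0;
}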
excellent, very insightful.
great!!
Thank you.
Awesome Code, Helped a lot, take away a lot of headache. Thanks
thank u...it helped me a lot...
thx brother.
|
http://www.programming-techniques.com/2011/08/returning-two-dimensional-array-from.html
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
Mapping in each case. While Google does not provide specific mapping controls for Silverlight/WPF, it's still quite straightforward to integrate the browser version, once you know where to start.
Overview
Microsoft provide Bing mapping controls for Silverlight, Windows Phone 7, WPF, with a very similar API (and by very similar I mean in most cases there is just a different namespace). There is also an AJAX version, but there's no real reason to use that here.
Google provide a JavaScript API which can be used by hosting in a browser control, which works well in WPF, WP7 and Silverlight out-of-browser; in Silverlight in the browser, hosting HTML content is not possible, so I'll demonstrate overlaying HTML on top of the Silverlight app. For WPF (or WinForms) applications there's also the option of embedding the Google Maps Flash control, but I'll stick with the web version as it spans these different platforms.
In both cases there are "reasonable use" limits for free use of the service, which seem to me quite... reasonable. In the case of Google Maps there is a restriction around using the service for free if your app is not also free, and it's not clear to me exactly what this means, while Bing does not impose such a restriction, which might be relevant if you're publishing a WP7 app.
Data
As a data source I'll import data in the GPX format (an XML format for exchange of GPS data). I'll just pull out a sequence of locations from this, which we can then plot on the map. GPX files contain two types of locations, "track points" (part of a "track" which is meant to be logged GPS data), and "way points" (which are more meant to be part of a plotted course or significant locations), here we'll just play dumb and extract everything in a flattened format.
I'm just embedding a few GPX files in the app resources for this example. I've chosen a few running routes which you can see in the examples below, no prizes will be awarded for guessing which one I logged myself.
private void LoadTrack(string track)
{
    using (var streamReader = new StreamReader(StreamForTrack(track)))
    {
        XDocument doc = XDocument.Parse(streamReader.ReadToEnd());
        var ns = doc.Root.GetDefaultNamespace();
        Locations = doc.Descendants()
                       .Where(el => el.Name == ns + "wpt" || el.Name == ns + "trkpt")
                       .Select(trkpt => new Location
                       {
                           Latitude = double.Parse(trkpt.Attribute("lat").Value),
                           Longitude = double.Parse(trkpt.Attribute("lon").Value)
                       })
                       .ToLocationCollection();
    }
}

/// Work around annoying LocationCollection - which contains no AddRange method or collection constructor
public static LocationCollection ToLocationCollection(this IEnumerable<Location> locations)
{
    var locs = new LocationCollection();
    foreach (var loc in locations)
    {
        locs.Add(loc);
    }
    return locs;
}
Lastly I'll expose the available tracks as a property AllTracks and the selected track as Track:
private string _track; public string Track { get { return _track;} set { if (value == _track) return; _track = value; LoadTrack(_track); OnPropertyChanged("Track"); } }
Bing - Silverlight
It's refreshingly simple to create a map to display our route. Setting the DataContext to the class described above, the following XAML displays a map of the route:
<!--Need to set CredentialsProvider=...--> <Grid> <Grid.DataContext> <local:GpxPath /> </Grid.DataContext> <bing:Map x: <bing:MapPolyline </bing:Map> </Grid>
On top of this it's nice to actually show the relevant part of the map, so lets set the view based on the bounding rectangle of the given locations. I wanted to do this in XAML, but unfortunately this is a method on the Map class. First step, wrap this in an attached property:
public static class Mapping { // ... public static readonly DependencyProperty ViewProperty = DependencyProperty.RegisterAttached("View", typeof(LocationRect), typeof(Mapping), new PropertyMetadata(OnViewPropertyChanged)); private static void OnViewPropertyChanged(DependencyObject d, DependencyPropertyChangedEventArgs e) { Map map = d as Map; var rect = e.NewValue as LocationRect; if (map != null && rect != null) { map.SetView(rect); } } }
Then we can make a converter to get the bounding LocationRect of our Locations collection:
public class LocationsViewConverter : IValueConverter { public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture) { var locs = value as IEnumerable<Location>; if (locs == null || !locs.Any()) { return null; } return new LocationRect(locs.Max(l => l.Latitude), locs.Min(l => l.Longitude), // N, W locs.Min(l => l.Latitude), locs.Max(l => l.Longitude)); // S, E } // ... }
It's quite nice to see pushpins for the route start/end, so lets add them. Just as MapPolyline is analogous to Polyline, MapItemsControl is to ItemsControl. So I can add pushpins by binding to the appropriate collection. Add a property and update it when loading a new track:
public IEnumerable<Location> PushPins { get; set; } // Really add property changed notification // ... PushPins = new List<Location> { locations.First(), locations.Last() };
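Fleshing that out with change notification, following the same pattern as the Track property above (a sketch of mine; the backing field name is an assumption):

private IEnumerable<Location> _pushPins;
public IEnumerable<Location> PushPins
{
    get { return _pushPins; }
    set
    {
        if (value == _pushPins) return;
        _pushPins = value;
        OnPropertyChanged("PushPins");   // assumes the class already raises INotifyPropertyChanged events
    }
}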
Here's the updated XAML with the view setting and push pins:
<bing:Map <bing:MapLayer> <bing:MapPolyline </bing:MapLayer> <bing:MapItemsControl <bing:MapItemsControl.ItemTemplate> <DataTemplate> <bing:Pushpin </DataTemplate> </bing:MapItemsControl.ItemTemplate> </bing:MapItemsControl> </bing:Map>
And the end result:
Bing - WPF
The WPF Bing control follows the Silverlight control API very closely, and other than the namespace there's little difference. Some options are different for the surrounding UI, but the main API is the same. It literally is as simple as using the same code files with:
#if SILVERLIGHT using Microsoft.Maps.MapControl; #else using Microsoft.Maps.MapControl.WPF; #endif
The XAML is very similar - see the project at the end of the post for other details.
Bing - WP7
Again the WP7 API is similar, although in this case there are some differences. An obvious one is the Location class is supplemented with the Geolocation class - for example LocationCollection is now an ObservableCollection<Geolocation>. In fact there are implicit conversions between Location and Geolocation, so in most cases we can again get away with the same C# code with some careful use of types - other than some conditional includes:
#if WINDOWS_PHONE using Microsoft.Phone.Controls.Maps; using Microsoft.Phone.Controls.Maps.Platform; #elif SILVERLIGHT using Microsoft.Maps.MapControl; #else using Microsoft.Maps.MapControl.WPF; #endif
Of course the end result looks somewhat different:
And finally
The various projects with the above code can be downloaded here. If I was going to create the same thing again I'd keep references to Bing namespaces entirely out of the common code with some more aggressive use of converters, rather than dancing around the differences, but as an exploration of the APIs I found it interesting to see how little had to change.
I'm going to pause for there and show the Google Maps version in another installment. There there will be some more interesting issues to consider.
|
http://blog.scottlogic.com/2012/05/02/mapping-in-wpf-silverlight-and-wp7.html
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
Building Collaboration Applications That Mix Web Services Hosted Content with P2P Protocols 1
1 Building Collaboration Applications That Mix Web Services Hosted Content with P2P Protocols 1 Ken Birman, Jared Cantwell, Daniel Freedman, Qi Huang, Petko Nikolov, Krzysztof Ostrowski Dept of Computer Science Cornell University Ithaca, NY, USA {ken, dfreedman, qhuang, {jmc279, The most commonly deployed web service applications employ client-server communication patterns, with clients running remotely and services hosted in data centers. In this paper, we make the case for Service-Oriented Collaboration applications that combine service-hosted data with collaboration features implemented using peer-to-peer protocols. Collaboration features are awkward to support solely based on the existing web services technologies. Indirection through the data center introduces high latencies and limits scalability, and precludes collaboration between clients connected to one-another but lacking connectivity to the data center. Cornell s Live Distributed Objects platform combines web services with direct peer-to-peer communication to eliminate these issues. I. INTRODUCTION There is a growing opportunity to use Service-Oriented Collaboration Applications in ways that can slash healthcare costs, improve productivity, permit more effective search and rescue after a disaster, enable a more nimble information-enabled military, or make possible a world of professional dialog and collaboration without travel. Collaboration applications will need to combine two types of content: traditional web service hosted content, such as data from databases, image repositories, patient records, and weather prediction systems, with a variety of collaboration features, such as chat windows, white boards, peer-to-peer video and other media streams, and replication/coordination mechanisms. Existing web service technologies make it easy to build applications in which all data travels through a data center. Implementing collaboration features using these technologies is problematic because collaborative applications can generate high, bursty update rates and yet often require low latencies and tight synchronization between collaborating users. One can often achieve better performance using direct client-to-client (also called peerto-peer, or P2P) communication, but in today s SOA platforms, side-band communication is hard to integrate with hosted content. This problem is reflected by a growing number of publications on the integration of web services with peer-to-peer platforms, e.g., [2], [4], [8], [9], [10], [14], [15], [16], [20], [21]. Yet the issue remains unresolved (see Section VII for more details). Cornell s Live Distributed Objects platform [12] (Live Objects for short) allows even a non-programmer to construct content-rich solutions that blend traditional web services and peer-to-peer technologies, and to share them with others. This is like creating a slide show: drag-anddrop, after which the solution can be shared in a file or via and opened on other machines. The users are immersed in the resulting collaborative application: they can interact with the application and peers see the results instantly. Updates are applied to all replicas in a consistent manner. Moreover, in contrast to today s web service platforms, P2P communication can coexist with more standard solutions that reach back to the hosted content and trigger updates at the associated data centers. 
Thus, when an application needs high data rates, low latency, or special security, it can use protocols that bypass the data center to achieve the full performance of the network. This paper makes the following contributions. We describe a new class of Service-Oriented Collaboration applications that integrate service hosted content with peer-to-peer message streams. We analyze two example collaboration applications (search and rescue mission and virtual worlds), identifying shared characteristics. We list the key challenges that these kinds of applications place on their runtime environments. We describe a new class of multi-layered mashups and contrast them with more traditional, minibrowser-based approach to building mashups, characteristic of today s web development. We discuss the advantages of decoupling transport and information layers as a means of achieving reusability, customizability, ability to rapidly deploy collaboration applications in new environments and adapt them dynamically to the changing needs. We discuss the resulting objectoriented perspective, in which instances of distributed communication protocols are modeled uniformly as objects similar to those in Java,.NET, COM or Smalltalk. We present our Live Distributed Objects platform: an example of a technology that fits well with the 1 This work was supported, in part, by the NSF, AFRL, Intel and Cisco. Qi Huang is a visiting scientist from the School of Computer Sci & Tech; Huazhong University of Sci & Tech, supported by the Chinese NSFC, grant
2 layered, componentized model derived through our analysis. We compare performance of hosted Enterprise Service Bus (ESB) solutions with peer-to-peer communication protocols as an underlying communication substrate for service oriented collaboration. Rather than finding a clear winner, we identify relative strengths of each of the solutions tested. We see this as a further justification for the decoupling of information and transport layers advocated above: instead of a onesize fits all approach, an application can pick and choose among a menu of interchangeable components specialized for different environments. II. LIMITATIONS OF THE EXISTING MODEL There are two important reasons why integrating peerto-peer collaboration with server-hosted content is difficult. The first is not strictly limited to collaboration and peer-to-peer protocols; rather, it is a general weakness of the current web mashup technologies that makes it hard to seamlessly integrate data from several different sources.the web developers community has slowly converged towards service platforms that export autonomous interactive components to their clients, in the form of what we ll call minibrowser interfaces. A minibrowser is an interactive web page with embedded script, developed using AJAX, Silverlight, Caja, or similar technology, optimized for displaying a single type of content, for example interactive maps from Google Earth or Virtual Earth. The embedded script is often tightly integrated with backend services in the data center, making it awkward to access the underlying services directly from a different script or a standalone client. As a result, the only way such services can be mashed up with other web content is by either having the data center compute the mashup (so that it can be accessed via the minibrowser), or by embedding the entire minibrowser window in a web page. But an embedded minibrowser can t seamlessly blend with the surrounding content; it is like a standalone browser within its own frame, and runs independent of the rest of the page. To illustrate this point, consider Fig. 1 and Fig. 2. The figures are screenshots of web applications, with content from multiple sources mashed-up together. Fig. 1 was constructed using a standard web services approach, pulling content from the Yahoo! maps and weather web services and assembling it into a web page as a set of tiled frames. Each frame is a minibrowser with its own interactive controls, and comes from a single content source. To illustrate one of the many restrictions: if the user pans or zooms in the map frame, the associated map will shift or zoom, but the other frames remain as they were the frames are not synchronized. Now consider Fig. 2. Here we see a similar application constructed using Live Objects. In this case, content from different sources is overlaid in the same window and synchronized. We used white backgrounds to highlight the contributions of different sources, but there are no frame boundaries: elements of this mashup (which can include map layers, tables showing buildings or points of interest, icons representing severe weather reports, vehicles or individuals, etc.) co-exist layers within which the end user can easily navigate. Data can come from many kinds of data centers. Our example actually overlays weather from Google on terrain maps from Microsoft s Virtual Earth platform and extracts census data from the US Census Bureau: the lion coexists with the lamb. 
The second problem is that with the traditional style of web development, content is assumed to be fetched from a server, either directly over HTTP, or by interacting with a web service. Web pages downloaded by clients browsers Figure 1. Standard Minibrowser-Style Mashup Figure 2. Live Objects Multi-Layered Mashup
3 contain embedded addresses of specific servers. Client-toclient traffic routes through a data center. In contrast, Live Objects allow visual content and update events to be communicated using any sort of protocol, including client-server, but also overlay multicast, peer-to-peer replication, even a custom protocol designed by the content provider. As noted earlier, this makes it possible to achieve extremely high levels of throughput and latency. It also enhances security: the data center server can t see data exchanged directly between peers. The above discussion motivates our problem statement: Allow web applications to overlay content from multiple sources in a layered fashion, such that the distinct content layers share a single view and remain well synchronized: zooming, rotating, or panning should cause all layers to respond simultaneously, and an update in any of the layers should be reflected in all other layers. Allow updates to be carried by the protocol best matched to the setting in which the application is used. As noted earlier, the solutions discussed here are based on Live Objects. These support drag-and-drop application development. Of course, new types of components must be created for each type of content, but the existing collection of components provides access to several different types of web services hosted content (including all the examples given above). Once constructed, the resulting live application is stored as an XML file. The file can be moved about and even embedded in . Users that open it find themselves immersed into the application. Examples of transport protocols optimized for various settings include support for WAN networks with NATs and firewalls (SOLO [6]), low latency (Ricochet [1]), high throughput and very large numbers of nodes (QSM [11]), large numbers of irregularly overlapping multicast groups (Gossip Objects [3]), and strong reliability properties (Properties Framework [13]). III. SERVICE ORIENTED COLLABORATION Before saying more about our approach, we analyze an example collaboration application to expose the full range of needs and issues that arise. Consider a rescue mission coordinator: a police or fire chief coordinating teams who will enter a disaster zone in the wake of a catastrophe to help survivors, control dangerous situations (electrical wires down, chemical leaks, fires, etc.), and move supplies as needed. The coordinator, a non-programmer, would arrive on the scene, build a new collaboration tool, and distribute it to his/her team. Each team member would carry a tablet-style device with wireless communication capabilities. The application built by the coordinator would be installed on each team member s mobile device, and in the offices in mission headquarters. The coordinator would then deploy teams in the field. Our rescue workers now use the solution to coordinate and prioritize actions, inform each other of the evolving situation, steer clear of hazards, etc. As new events occur, the situational status would evolve, and the team member who causes or observes these status changes would need to report them to the others. For example, removing debris blocking access to a building may enable the team to check it for victims, and fire that breaks out in a chemical storage warehouse may force diversion of resources. As rescue workers capture information, their mobile devices send updates that must be propagated in real-time. 
Having defined the scenario, now let s analyze in more detail the requirements it places on our collaboration tool. First, note that, the collaboration tool pulls data from many kinds of sources. It makes far more sense to imagine that weather information, maps, traffic, sensors data, positions of units, buildings, messages and alerts come from a dozen providers than to assume that one organization would be hosting services with everything we need in one place. Data from distinct sources could have different format and one will often need to interface to each using its own protocols and interfaces. Second, as conditions evolve the team might need to be modify the application, for example adding new types of information, changing the way it is represented, or even modifying the way team members communicate (for example, if reach-back network links fail). Whereas a minibrowser would typically be prebuilt with all the available features in place, our scenario demands a much more flexible kind of tool that can be redesigned while in use. Third, depending on the location and other factors, the best networking protocols and connectivity options may vary. In our rescue scenario, the workers may have to use wireless P2P protocols much of the time, reaching back to hosted services only intermittently when a drone aircraft passes within radio range. More broadly, the right choice of protocol should reflect the operating conditions, and if these change, the platform should be capable of swapping in a different protocol without disrupting the end user. This argues for a decoupling of functionality. Whereas a minibrowser packages it all into one object, better is a design in which the presentation object is distinct from objects representing information sources and objects representing transport protocols. Decoupling makes it possible to dynamically modify or even replace a component with some other (compatible) option when changing conditions require it. We have posed what may sound like a very specialized problem, but in fact we see this as a good example of a more general kind of need that could arise in many kinds of settings. For example, consider a physician treating a patient with a complex condition, who needs collaboration help from specialists, and who might even be working in a remote location under conditions demanding urgent action. The mixture of patient data, telemetry, image studies, etc., may be just as rich and dynamic as in our search and rescue scenario, and the underlying communication options equally heterogeneous and unpredictable. A minibrowser pre-designed for a wired environment might
4 perform poorly or fail under such conditions. With Live Objects, if there is a way to solve the problem, there is a way to build the desired mashup. Throughout the above we noted requirements; for clarity, we now summarize them below. As noted, these needs are seen in many settings. Indeed, we believe them to be typical of most collaboration applications. We would like to enable a non-programmer to rapidly develop a new collaborative application by composing together and customizing preexisting components. We would like to be able to overlay data from multiple sources, potentially in different formats, obtained using different protocols and inconsistent interfaces. We would like to be able to dynamically customize the application at runtime, e.g., by incorporating new data sources or changing the way data is presented, during a mission, and without disrupting system operation. We would like to be able to accommodate new types of data sources, new formats or protocols that we may not have anticipated at the time the system was released. Data might be published by the individual users, and it might be necessary for the users to exchange their data without access to a centralized repository. Data may be obtained using different types of network protocols, and the type of the physical network or protocols may not be known in advance; it should be possible to rapidly compose the application using whatever communication infrastructure is currently available. Users may be mobile or temporarily disconnected, infrastructure may fail, and the topology of the network and its characteristics might change over time. The system should be easily reconfigurable. The requirements outlined above might seem hard to satisfy, but in fact, the solution is surprisingly simple. Our analysis motivates a component-oriented architecture, in which the web services and hosted content are modeled as reusable overlayed information layers backed by customizable transport layers: a graph of components. A collaborative application is a forest: a set of such graphs. Our vision demands a new kind of collaboration standard, in order to facilitate the side-by-side coexistence of components that might today be implemented as proprietary minibrowsers: if we enable components to talk to one-another, we need to agree on the events and representation that the dialog will employ. The decoupling of functionality into layers also suggests a need for a standardized layering: in the examples above, one can identify at least four (the visualization layer, the linkage layer that talks to the underlying data source, the update generating and interpreting layer, and the transport protocol). We propose that this decoupling be done using event-based interfaces; a natural way of thinking about components that dates back to Smalltalk. Thus, rather than having the data center developer offer content through proprietary minibrowser interface, he/she would define an event-based interface between transport and information layers; the visual events delivered by the transport could then be delivered to an information layer responsible for visualizing them. It, in turn, would capture end-user mouse and keyboard events and pass them down, also as events. With this type of event-based decoupling, either layer could easily be replaced with a different one. In this perspective, distributed peer-to-peer protocols would also be encapsulated within their respective transport layers. 
Thus, for example, one version of a transport layer could fetch data directly from a server in a data center, whereas a different version might use a peerto-peer dissemination architecture, a reliable multicast protocol; it could leverage different type of hardware or be optimized for different types of workloads. Provided that the different versions of the transport layer conform to the same standardized event-based interfaces, the application could then switch between them as conditions demand. In this event-oriented world, end-users interact through Live Objects that transform actions into updates that are communicated in the form of events that are shared via the transport layer. The protocol implemented by the transport layer might replicate the event, deliver it to the tablets of our rescue workers, and report it through the event-based interface back to the information layer at which the event has originated. Thus, the transport layer with the embedded distributed protocol would behave very much like an object in Smalltalk: it would consume events and respond with events. This motivates thinking about communication protocols as objects, and indeed in treating them as objects much as we treat any other kind of object in a language like Java or in a runtime environment like Jini or.net. Doing so unifies apparently distinct approaches. Just as a remotely hosted form of content such as a map or an image of a raincloud can be modeled as an object, so can network protocols be treated as objects. Some P2P systems try to make everything a P2P interaction. But in the examples we ve seen, several kinds of content would more naturally be hosted: maps and 3-D images of terrain and buildings, weather information, patient health records, etc. On the other hand, collaboration applications are likely to embody quite a range of P2P event streams: each separate video object, GPS source, sensor, etc, may have its own associated update stream. If one thinks of these as topics in publishsubscribe eventing systems, an application could have many such topics, and the application instance running on a given user s machine could simultaneously display data from several topics. We have previously said that we d like to think of protocols as objects. It now becomes clear that further precision is needed: the objects aren t merely protocols, but in fact are individual protocol instances. Our system will need to simultaneously support potentially large numbers of transport objects running concurrently in
5 the end-user s system, in support of a variety of applications and uses. All of this leads to new challenges. The obvious one was mentioned earlier: today s web services don t support P2P communication. Contemporary web services solutions presume a client-server style of interaction, with data relayed through a message-oriented middleware broker. Even if clients are connected to one-another, if they lose connectivity to the broker, they can t collaborate. Another serious issue arises if the clients don t trust the data center: sensitive data will need to be encrypted. The problem here is that web services security standards tend to trust the web services platform itself. The standards offer no help at all if we need to provide end-to-end encryption mechanisms while also preventing the hosted services from seeing the keys. Finally, we encounter debilitating latency and throughput issues: hosted services will be performancelimiting bottlenecks when used in settings with large numbers of clients, as we will see in our experimental section. We are left with a mixture of good and bad news: Web services standardize client access to hosted services and data: we can easily build some form of multi-framed web page that could host each kind of information in its own minibrowser. When connectivity is adequate, relaying data via a hosted service has many of the benefits of a publish-subscribe architecture, such as robustness as the set of clients changes. The natural way to think of our application is as an object-oriented mashup, but web services provide no support for this kind of client application development. Our solution may perform very poorly, or fail if the hosted services are inaccessible. All data will probably be visible to the hosted services unless the developer uses some sort of non-standard end-to-end cryptography. IV. USING LIVE OBJECTS FOR COLLABORATION Cornell s Live Objects platform supports componentized, layered mashup creation and sharing, and overcomes limitations of existing web technologies. We ve used it to construct a number of service oriented collaboration applications, some of which are quite sophisticated, including a solution to the search and rescue problem stated in Section 3. The major design aspects are as follows: The developer starts by creating (or gaining access to) a collection of components. Each component is an object that supports live functionality, and exposes event-based interfaces by which it interacts with other components. Examples include: Components representing hosted content Sensors and actuators Renderers that graphically depict events Replication protocols Synchronization protocols Folders containing sets of objects Display interfaces that visualize folders. Mashups of components are represented as a kind of XML web pages; each describing a recipe for obtaining and parameterizing components that will serve as layers of the composed mashup. We call such an XML page a live object reference. References can be distributed as files, over , HTTP or other means. The application is created by building a forest consisting of graphs of references that are mashed together. At design time, an automated tool lets the developer drag and drop to combine references for individual objects into an XML mashup of references describing a graph of objects. The platform type-checks mashups to verify that they compose correctly. 
For example, a 3-D visualization of an airplane may need to be connected to a source of GPS and other orientation data, which in turn needs to run over a data replication protocol with specific reliability, ordering or security properties. When activated on a user s machine, an XML mashup yields a graph of interconnected proxies. A proxy is a piece of running code that may render, decode, or transform visual content, encapsulate a protocol stack, and so on. Each subcomponent in the XML mashup produces an associated proxy. The hierarchy of proxies reflects the hierarchical structure of the XML mashup. If needed, an object proxy can initialize itself by copying the state from some active proxy (our platform assists with this sort of state transfer). The object proxies then become active ( live ), for example by relaying events from sensors into a replication channel, or by receiving events and reacting to them (e.g. by redisplaying an aircraft). Our approach shares certain similarities with the existing web development model, in the sense that it uses hierarchical XML documents to define the content. On the other hand, we depart from some of the de-facto stylistic standards that have emerged. For example if one pulls a minibrowser from Google Earth, it expects to interact directly with the end user, and includes embedded JavaScript that handles such interactions. In Live Objects, the same functionality would be represented as a mashup of a component that fetches maps and similar content with a second component that provides the visualization interface. Although the term mashup may sound static, in the sense of having its components predetermined, this is not necessarily the case. One kind of live object could be a folder including a set of objects, for example extracted from a directory in a file system or pulled from a database in response to a query. When the folder contents change,
6 the mashup is dynamically updated, as might occur when a rescue worker enters a building or turns a corner. Thus, Live Objects can easily support applications that dynamically recompute the set of visible objects, as a function of location and orientation, and dynamically add or remove them from the mashup. A rescuer would automatically and instantly be shown the avatars of others who are already working at that site, and be able to participate in conference-style or point-to-point dialog with them, through chat objects that run over multicast protocol objects. This model can support a wide variety of collaboration and coordination paradigms. In summary, the Live Objects platform makes it easy for a non-programmer to create the needed application. The rescue coordinator pulls prebuilt object references from a folder, each corresponding to a desired kind of information. Hosted data, such as weather, terrain maps, etc, would correspond to objects that point to a web service over the network. Peer-to-peer objects would implement chat windows, shared white boards, etc. Event interfaces allow such objects to coexist in a shared display window that can pan, zoom, jump to new locations, etc. The relative advantages and disadvantages of our model can be summarized as follows: Like other modern web development tools, our platform supports drag-and-drop style of development, permitting fast, easy creation of content-rich mashups. The resulting solutions are easy to share. By selecting appropriate transport layers, functionality such as coordination between searchers can remain active even if connectivity to the data center is disrupted. Streams of video or sensor data can travel directly and won t be delayed by the need to ricochet off a remote and potentially inaccessible server. New event-based interoperability standards are needed. Lacking them, we could lose access to some of the sophisticated proprietary interactive functionality optimized for proprietary minibrowser-based solutions with an embedded JavaScript. Direct peer-to-peer communication can be much harder to use than relaying data through a hosted service that uses an Enterprise Service Bus (ESB) model. Furthermore, the lack of a one size fits all publish-subscribe substrate forces the developers to become familiar with and choose between a range of different and incompatible options. An wrong choice of transport could result in degraded QoS, inferior scalability, or even data loss. V. SECOND LIFE SCENARIO Up to now, we have focused on a small-scale example. But our longer term goal is to support a large-scale nextgeneration collaboration system similar to Second Life, a virtual reality immersion system created by Linden Labs. Today s hosted Second Life system runs on a data center consisting of a large number of servers storing the state of the virtual world, the locations of all users, etc. Users (represented by avatars) customize the environment, then move about and interact with others. For example, one can create a cybercafé, customize its music, furniture, wall treatments, etc. As other Second Life users enter the room, they can interact with the environment and oneanother. In the Second Life architecture, whenever an avatar moves or performs some action in the virtual world, a request describing this event is passed to the hosting data center and processed by servers running there. 
When the number of users in a scenario isn t huge, Second Life can keep up using a standard workload partitioning scheme in which different servers handle different portions of the virtual world. However, when loads increase, for example because large numbers of users want to enter the same virtual discotheque, the servers can become overwhelmed and are forced to reject some of the users or reduce their frame-rendering rates and resolution. Under such conditions, Second Life might seem jumpy and unrealistic. In our lab at Cornell we re using Live Objects to build our own version of Second Life, in which some content will still be hosted, but many kinds of client-to-client communication will flow directly. This form of Live Objects applications poses new but not insurmountable challenges. On the one hand, many aspects of the application can be addressed in the same manner we ve outlined for the search and rescue application. One could use Microsoft Virtual Earth, or Google Earth, as a source of 3D textures representing landscapes, buildings, etc. The built-in standards for creating mashups could be used to identify sensors and other data sources, which could then be wrapped as Live Objects and incorporated into live scenes. On top of this, streaming media sources such as video cameras mounted at street level in places such as Tokyo s Ginza can be added to create realistic experience. The more complex issue is that a search and rescue application can be imagined as a situational state fully replicated across all of its users. In this model, all machines would see all the state updates (even if the user is zoomed into some particular spot within the overall scene). One can contemplate such an approach because the aggregate amount of information might not be that large. In contrast, Second Life conceptually is a whole universe, unbounded in size and hence with different users in very distinct parts of the space. It would make no sense for every user to see every event. To solve this problem within our Live Objects platform, we built a simple database that can be queried at low cost. Each user sees only the objects within some range, or within line of sight. As a user moves about, the platform recomputes the query result, and then updates the display accordingly. Of course this basic mechanism isn t the whole story, but given the brevity constraints on the current paper, it isn t possible to provide all the details. Instead, let s ask how a solution such as the one we ve
Instead, let's ask how a solution such as the one we've mocked up at Cornell contrasts with the more standard version of Second Life: a hosted platform that exports a minibrowser. Consider a 3D texture representing terrain in some region:
1. In a minibrowser approach, the minibrowser generates the texture from hosted data (say, a map) and displays it. This model makes it difficult (not impossible) to superimpose other content over the texture; generally, we would need to rely on a hosting system's mashup technology to do this. For example, if we wanted to blend weather information from the National Hurricane Center with a Google Map, the Google map service would need to explicitly support this sort of embedding.
2. In our Second Life scenario, the visible portion of the scene (the part of the texture being displayed) will often be controlled by events generated by other Live Objects that share the display window, perhaps under control of users running on machines elsewhere in the network. These remote sources won't fit into the interaction model expected by the minibrowser.
3. The size and shape of the display window and other elements of the runtime environment should be inherited from the hierarchy structure of the object mashup used to create the application. Thus our texture should learn its size and orientation and even the GPS coordinates on which to center from the parent object that hosts it, and similarly until we reach the root object hosting the display window. A minibrowser isn't a component: it runs the show.
On the other hand, minibrowsers retain one potential advantage. Since all aspects of the view are optimized to run together, the interaction controls might be far more sophisticated and perform potentially much better than a solution resulting from mashing up multiple layers developed independently. Furthermore, in many realistic examples event-based interfaces could get fairly complex, and difficult for most developers to work with. This observation highlights the importance of developing component interface and event standards for the layered architecture we've outlined. The task isn't really all that daunting: the designers of Microsoft's Object Linking and Embedding (OLE) standard faced similar challenges, and today, their OLE interfaces are pervasively used to support thousands of plugins that implement context menus, virtual folders and various namespace extensions, and drag-and-drop technologies. Lacking the needed standards, we've compromised: the Live Objects platform supports both options today. In addition to allowing hosted content to be pulled in and exposed via event interfaces, components developed by some of our users also use embedded minibrowsers to gain access to a wide range of platforms, including Google, Yahoo, MSN, Flickr, YouTube, and Facebook.

VI. PERFORMANCE EVALUATION

Central to our argument is the assertion that hosted event notification solutions scale poorly and stand as a barrier to collaboration applications, and that developers will want to combine hosted content with P2P protocols to overcome these problems. In this section we present data to support our claims. Some of the results (Fig. 3, Fig. 4) are drawn from a widely cited industry whitepaper ([7]) and were obtained using a testing methodology and setup developed and published by Sonic Software ([18]). The remainder was produced in our own experiments. The first graph (Fig. 3), from the industry white paper, analyzes the performance of several commercial Enterprise Service Bus (ESB) products.
Shown is the maximum throughput (msgs/sec) for 1024-byte messages. The experiment varies the number of subscribers while using a single publisher that communicates through a single hosted message broker on a single topic. Brokers are configured for message durability: even if a subscriber experiences a transient loss of connectivity, the publisher retains and hence can replay all messages. As the number of subscribers increases, performance degrades sharply.
[Figure 3. Scalability of Commercial ESBs — throughput (k msgs/s) with 1, 10, 25 and 50 subscribers for Fiorano MQ 2008, Tibco EMS, Sonic MQ 7.0, Active MQ, JBoss 1.4 and Sun MQ 4.1]
[Figure 4. Scalability of Commercial ESBs]
Although not shown, latency will also soar because the amount of time the broker needs to spend sending a single message increases linearly with the number of subscribers. In collaboration applications, durability is often not required. The second graph (Fig. 4) shows throughput in an experiment in which the publisher does not log data. Here, a disconnected subscriber would experience a loss. We find that while the maximum throughput is much higher, the degradation of performance is even more dramatic. One could improve scalability using clustered service structures, but such a step would have no impact on latency, since clients would still need to relay data through the data center. Our point is simply that hosted ESB solutions don't necessarily scale well, and that the client-to-data center communication path could introduce intolerable performance overheads.
Next, we report on some experiments we conducted on our own at Cornell, focusing on scalability of event notification platforms that leverage peer-to-peer techniques for dissemination and recovery. On the first graph (Fig. 5), we compare the maximum throughput of two decentralized reliable multicast protocols, again with 1024-byte messages, a single topic and a single publisher. Unlike in the previous tests, which ran on 1Gbit/sec LANs, these experiments used a 100Mbit/sec LAN; this limits the peak performance to 10,000 messages/second. QSM [11] achieves stable high throughput (saturating the network). JGroups, a popular product, runs at about a fifth that speed, collapsing as the number of subscribers increases. Also, at small loss rates, latency in QSM is at the level of 10-15ms irrespective of the number of subscribers. When the number of topics is varied, QSM maintains its high performance. On the second graph (Fig. 6), we report performance for 110 subscribers, but performance for other group sizes is similar. JGroups performance was higher with smaller group sizes, but erodes as the number of topics increases. JGroups failed when we attempted to configure it with more than 256 topics.
[Figure 5. Scalability of QSM and JGroups (throughput, k msgs/s, for various group sizes; series: JGroups, QSM with 1 sender, QSM with 2 senders)]
Finally, we look at two scalable protocols under conditions of stress, with a focus on delivery latency (y axis) as a fixed message rate is spread over varying numbers of topics. 64 subscribers each join some number of topics, a publisher sends data at a rate of 1000 messages/second, selecting the topic in which to send at random. Our experimental setup, on Emulab, injects a random 1% message loss rate. In Fig. 7 we see that Ricochet [1], a Cornell-developed protocol for low-latency multicast, maintains steady low-latency delivery (about 10ms; y-axis) as the number of topics increases to 1024 (x-axis). In contrast, latency soars when we repeat this with the industry-standard Scalable Reliable Multicast (SRM), widely used for event notification in datacenters. As can be seen in the graph, SRM's recovery latency rises linearly in the number of topics, reaching almost 8 seconds with 128 groups.
To summarize, our experiments confirm that:
- Hosted enterprise service bus architectures can achieve high levels of publish-subscribe performance for small numbers of subscribers, but performance degrades very sharply as the number of subscribers or topics grows.
- The JGroups and SRM platforms, which don't leverage peer-to-peer techniques, scale poorly in the number of subscribers or topics.
- QSM and Ricochet, where subscribers cooperate, scale well in these dimensions. Ricochet achieved the best recovery latency when message loss is an issue (but at relatively high overhead, not shown on these graphs). QSM at small loss rates achieves similar average latency with considerably lower network overheads, but if a packet is lost, it may take several seconds to recover it, making it less appropriate for time-critical applications.
We don't see any single winner here: each of the solutions tested has some advantages that its competitors lack. Indeed, we're currently developing a new P2P protocol suite, called SOLO [6]; it builds an overlay multicast tree within which events travel, and is capable of self-organizing in the presence of firewalls, network address translators (NAT) and bottleneck links. A separate project is creating a protocol suite that we call the Properties Framework [13].
[Figure 6. Scalability of QSM and JGroups (throughput, k msgs/s, for various numbers of topics; series: JGroups with 2, 8, 32, 64 and 110 nodes, QSM with 110 nodes)]
[Figure 7. Delivery latency (ms) for SRM and Ricochet with varying numbers of topics]
The goal is to offer strong forms of reliability that can be customized for special needs. Thus, speed and scalability are only elements of a broader story. Developers will need different solutions for different purposes. By offering a flexible yet structured component mashup environment, Live Objects makes it possible to create applications that mix hosted with P2P content, and that can adapt their behavior, even at runtime, to achieve desired properties in a way matched to the environment.

VII. PRIOR WORK

The idea of integrating web services with peer-to-peer platforms is certainly not new ([2], [4], [8], [9], [10], [14], [15], [16], [20], [21]). The existing work falls roughly into two categories. The first line of research is focused on the use of peer-to-peer technologies, particularly JXTA, as a basis for scalable web service discovery. The second line of research concentrates on the use of replication protocols at the web service backend to achieve fault-tolerance. In both cases, P2P platforms such as JXTA are treated not as means of collaboration or media carrying live content, but rather as a supporting infrastructure at the data center backend. In contrast, our work is focused on blending the content available through P2P and web service protocols; neither technology is subordinate with respect to the other. Technologies that use peer-to-peer protocols to support live and interactive content have existed earlier; an excellent example of such technology is the Croquet [17] collaboration environment, in which the entire state of a virtual 3D world is stored in a peer-to-peer fashion and updated using a two-phase commit protocol. Other work in this direction includes [19]. However, none of these systems supports the sorts of componentized, layered architectures that we have advocated here. The types of peer-to-peer protocols these systems can leverage, and the types of traditional hosted content they can blend with their P2P content, are limited. In contrast, our platform is designed from the ground up with extensibility in mind; every part of it can be replaced and customized, and different components within a single mashup application can leverage different transport protocols. Prior work on typed component architectures includes a tremendous variety of programming languages and platforms, including early languages such as Smalltalk alongside modern component-based environments such as Java, .NET or COM, specialized component architectures such as MIT's Argus system, flexible protocol composition stacks such as BAST [5], service-oriented architectures such as Jini, and others. None of these, however, has been used in the context of integrating service-hosted and peer-to-peer content. Discussion of component integration systems and their relation to live objects, however, is beyond the scope of this paper. More details can be found in [12]. Finally, much relevant prior work consists of the scripting languages mentioned in the discussion above: JavaScript, Caja, Silverlight, and others. As explained earlier, our belief is that even though these languages are intended for fairly general use, they have evolved to focus on minibrowser situations in which the application lives within a dedicated browser frame, interacts directly with the user, and cannot be mixed with content from other sources in a layered fashion.
Live Objects can support minibrowsers as objects, but we've argued that by modeling hosted content at a lower level as components that interact via events and focusing on the multi-layered style of mashups as opposed to the standard tiled model, we gain flexibility.

VIII. CONCLUSIONS

To build ambitious collaboration applications, the web services community will need ways to combine (to "mash up") content from multiple sources. These include hosted sources that run in data centers and support web services interfaces, but also direct peer-to-peer protocols capable of transporting audio, video, whiteboard data and other content at high data rates, with low latency. A further need is to allow disconnected collaboration, without mandatory reach-back to data centers. Our review of the performance of enterprise service bus eventing solutions in the standard hosted web services model made it clear that hosted event channels won't have the scalability and latency properties needed by many applications. P2P alternatives often achieve far better scalability, lower latency, and higher throughput. They also have security advantages: the data center doesn't get a chance to see (and save) every event. The Live Objects platform can seamlessly support applications that require a mixture of data sources, including both hosted and direct P2P event-stream data. Further benefits include an easy-to-use drag-and-drop programming style that yields applications represented as XML files, which can be shared as files or even via e-mail. Users that open such files find themselves immersed in a media-rich collaborative environment that also offers strong reliability, high performance, impressive scalability and (in the near future) a powerful type-driven security
mechanism. Most important of all, Live Objects are real: the platform is available for free download from Cornell.

REFERENCES
[1] Mahesh Balakrishnan, Ken Birman, Amar Phanishayee, Stefan Pleisch. Ricochet: Lateral Error Correction for Time-Critical Multicast. NSDI.
[2] Farnoush Banaei-Kashani, Ching-Chien Chen, Cyrus Shahabi. WSPDS: Web Services Peer-to-peer Discovery Service. ICOMP.
[3] Ken Birman, Anne-Marie Kermarrec, Krzysztof Ostrowski, Marin Bertier, Danny Dolev, Robbert van Renesse. Exploiting Gossip for Self-Management in Scalable Event Notification Systems. DEPSA.
[4] Jorge Cardoso. Semantic integration of Web Services and Peer-to-Peer networks to achieve fault-tolerance. IEEE GrC.
[5] Benoit Garbinato, Rachid Guerraoui. Flexible Protocol Composition in Bast. ICDCS.
[6] Qi Huang, Ken Birman. Self Organizing Live Objects (SOLO). Submission to DSN 2009; Dec.
[7] JMS Performance Comparison for Publish Subscribe Messaging. Fiorano Software Technologies Pvt. Ltd., February.
[8] Timo Koskela, Janne Julkunen, Jari Korhonen, Meirong Liu, Mika Ylianttila. Leveraging Collaboration of Peer-to-Peer and Web Services. UBICOMM.
[9] Shenghua Liu, Peep Küngas, and Mikhail Matskin. Agent-based web service composition with JADE and JXTA. SWWS.
[10] Federica Mandreoli, Antonio Perdichizzi, and Wilma Penzo. A P2P-based Architecture for Semantic Web Service Automatic Composition. DEXA.
[11] Krzysztof Ostrowski, Ken Birman, Danny Dolev. QuickSilver Scalable Multicast (QSM). NCA.
[12] Krzysztof Ostrowski, Ken Birman, Danny Dolev, and Jong Hoon Ahnn. Programming with Live Distributed Objects. ECOOP.
[13] Krzysztof Ostrowski, Ken Birman, Danny Dolev, and Chuck Sakoda. Achieving Reliability Through Distributed Data Flows and Recursive Delegation. Submitted to DSN 2009; Dec.
[14] Mike Papazoglou, Bernd Krämer, and Jian Yang. Leveraging Web-Services and Peer-to-Peer Networks. CAiSE.
[15] Changtao Qu and Wolfgang Nejdl. Interacting the Edutella/JXTA Peer-to-Peer Network with Web Services. SAINT.
[16] Mario Schlosser, Michael Sintek, Stefan Decker, and Wolfgang Nejdl. A Scalable and Ontology-based P2P Infrastructure for Semantic Web Services. P2P.
[17] David Smith, Alan Kay, Andreas Raab, David Reed. Croquet: A Collaboration System Architecture. C5 03.
[18] Sonic Performance test suite, available at: chmark/index.asp
[19] Egemen Tanin, Aaron Harwood, Hanan Samet, Sarana Nutanong, Minh Tri Truong. A Serverless 3D World. GIS.
[20] Minjun Wang, Geoffrey Fox, and Shrideep Pallickara. A Demonstration of Collaborative Web Services and Peer-to-Peer Grids. ITCC.
[21] Zhenqi Wang, Yuanyuan Hu. A P2P Network Based Architecture for Web Service. WiCom 2007.
|
http://docplayer.net/1804788-Building-collaboration-applications-that-mix-web-services-hosted-content-with-p2p-protocols-1.html
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
package org.odmg;

/**
 * This exception is thrown when a call has been made to a method that
 * should not be called when a transaction is in progress.
 * @author David Jordan (as Java Editor of the Object Data Management Group)
 * @version ODMG 3.0
 */
public class TransactionInProgressException extends ODMGRuntimeException
{
    /**
     * Constructs an instance of the exception.
     */
    public TransactionInProgressException()
    {
        super();
    }

    /**
     * Constructs an instance of the exception with the provided message.
     * @param msg The message explaining the exception.
     */
    public TransactionInProgressException(String msg)
    {
        super(msg);
    }
}
|
http://kickjava.com/src/org/odmg/TransactionInProgressException.java.htm
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
So, I'm just starting to learn C++ and I'd like to start off by saying thank you to all those that created this site. It's been great and the guides are very helpful. :)
During the course of trying out example code for arrays, I ran into an issue where the code doesn't terminate when expected. In fact, it just goes until my computer gives an error response. So my question is: given the code below, which is a copy of what I have entered and is almost a copy/paste from the example, why doesn't it terminate on its own?
Thanks in advance. I know this is probably really basic stuff, but I only started yesterday. :)
Code:
#include <iostream>
using namespace std;
int main()
{
int x;
int y;
int array[8][8];
for ( x = 0; x < 8; x++ ) {
for ( y = 0; y < 8; y++ )
array[x][y]= x * y;
}
cout<<"Array Indices:\n";
for ( x = 0; x < 8; x++ ) {
for ( y = 0; x < 8; y++ )
cout<<"["<<x<<"]["<<y<<"]="<< array[x][y] <<" ";
cout<<"\n";
}
cin.get();
}
|
https://cboard.cprogramming.com/cplusplus-programming/136919-question-about-array-code-printable-thread.html
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
Hi, this isn't a specific fmod related question but I've seen that a few people on here are working on beat detection in some way. I'm just having some trouble getting what the general idea behind it is. I can understand why you would use the autocorrelation function which I have covered in maths which is helpful. My main problem is what the Fourier transform is used for, which seems to be the main part of the analysis. As far as I can understand it gets you the frequencies contained in the amount of data that you look at. So how would this be used when you need to preserve time, sort of, to try and see when it correlates with itself. If anyone can help explain anything related to this area I would really appreciate it, cheers.
Matt
- identitycrisisuk asked 13 years ago
- You must login to post comments
I think I found this paper using google. Just enter the search query “beat detection algorithms” et voilà.
You guys seem to be on the case with the beat detection stuff. I've got a working solution for finding good in/out cue points. Have any of you done anything like this….
At the moment I take a fixed-length sample (say 10 seconds) and track the energy, looking for steady rises or hits in volume to identify tracks that 'start with a bang' or fade out with a whimper….
any idea?
Hmm, I have not made any efforts to find a good cue position, since I just use the monitor (headphones connected to second soundcard) to find one.
I find it more important now to write a good beat detection algorithm: once you can detect the beats, you can also find the first beat, can’t you?
However, the finding of a good cue point also depends on how the piece of music is built up, most songs are composed of measures (not sure if this is the right word) of 4, 8, 16, 32, … beats, and only the first beat of such a measure is a good cue position.
I think it’s very difficult to detect which beat is the “first” beat of a measure, sometimes the first beat is the strongest, but it can also be the second or third one.
Furthermore, I forgot a little piece of code of my beat detection algorithm, here it comes:
[code:1omkbokb]
int w(int i) {
return (int)(2.0/3.0 * i + 6.0);
}
[/code:1omkbokb]
Good luck to both of you, unfortunately I do not have the time to develop the algorithm further at this moment (I have a master thesis to write), but when I have more time I’ll definitely look at all this stuff again.
Stijn
Hi,
I spent months writing beat matching algorithms using both FFT and straight volume searching. I have had a lot of success with just scanning the volume thresholds of the waveform. I have pages and pages of commented out old algorithm code :/
There are many factors that mess up beat matching, not least bent or wobbly tracks or tracks that change pitch in the middle. The only way to guarantee straight tracks is to encode them yourself ;D Other factors are the structure of the music. For instance, dance music is a 4/4 beat and generally has a break beat every 64 beats. Although a lot of dance music is constructed in this way there are many exceptions … Living Joy – Dreamer is a good example. It plays the 1st break (64 beats) perfectly, then the 2nd break has 72 beats, then the next three breaks follow the conventional 64 beat spacing again. Although mixed with another track it can sound good, it can also sound awful if break beats are not lined up :/ Then, different genres of music obey different rules, reggae for instance does not follow a 4/4 beat structure, 4/3 I think it is. Classical music speeds up and slows down all over the place. My conclusion is it would not be possible to make a beat matching algorithm that works on every piece of music no matter what!
The biggest problem I have found with using FFT, apart from the speed issues, is that once you have found a good beat how do you tie it to the exact position in the waveform? For instance, running an FFT using let's say 2048 samples means that the cue point is accurate to within 2048 samples. Overlapping the FFT with the previous one helps but is just too slow as more overlapping occurs. I read about how SETI do their FFT and they run a large scale FFT, then gradually decrease the FFT size over multiple passes. This is not really fast enough for realtime processing.
Finding the BPM is less than half the battle even with variable BPM tracks, finding the exact position of a beat is the key. You only have to play the same track twice and shift one by one sample to hear a discernable difference. Sample accuracy is extremely important. There are a number of DJ’ing programs out currently and I have not seen any with less than millisecond accuracy and 1ms is massive in the realms of mixing two bass beats.
I was well chuffed to work out that the software I am working on is accurate to 1/50,000,000 of a second!!! Can't wait to release it soon :DDD
Just … so much to do! keep this beat matching thread alive ;D
TS
Cool, yeah I’d be up for keeping this thread alive. The more information the better as my project is really research based even tho I will have to get something practical working. I am actually thinking of doing some work on just analysing the waveform this week as I’m having trouble getting fftw into a form I could use on windows. I read a paper from 1993 on using the autocorrelation function which was under very specific conditions but still might give peaks that would suggest a tempo.
Thinking about actually cueing the tracks, if you're pretty sure of the tempo and you edit the track so that it starts on a bar you could probably quite simply place markers right through on the start of each bar. Soundforge has a function to do this and you could probably program it too. Obviously it won't work on any songs with changing tempo etc. but it would be an easy solution. Other than that I've heard that the cross correlation function is the way to go for synchronising two songs. Aaaah and this is why you'll never kill real DJs 😉
IMHO the place to get all this sort of info is in Computer Music Journal, published by MIT Press. It can be found in most university/conservatorium libraries, or subscribed to in online form.
CMJ is where the really heavy research into digital audio gets published.
- lorien answered 13 years ago
Sorry to try and drag this thread back to life but it’s just cos someone mentioned using BASS to detect the tempo of a song. I’ve downloaded it but there seems to be no mention of it whatsoever unless it’s an extension feature (in the pack which is currently unavailable). Does anyone know how I can get hold of it or does anyone know of any other free methods of tempo calculation? I know that’s very unlikely but you never know.
I’m just trying to find a fallback incase I can’t get an algorithm of my own working so I can still write an application of some kind for my hons proj. I’m hoping to implement a comb filter next semester which might be the answer to all my problems but if I have a backup it’ll give me a bit more security and also help hold up my planning document. Cheers guys (and gals if any)
EDIT: Oh yeah, if the mods are wondering I will still use FMOD for most of the code cos the streaming is so wonderful. Just in case you thought I was being cheeky posting here 😉
Hi Mate,
Basically you would do the FFT on the stream either real time or before-hand with the aim of searching for the specific frequencies relating to the particular instruments needed to ‘beat match’ the track.
For instance, a drum would appear in the very low frequency bands whereas a hi-hat/cymbal would appear further up the spectrum. The general idea of a track running at 120 BPM is that it would have a beat every half a second, so the FFT should produce peaks for the main instrument (usually a drum) twice a second.
Obviously it would be a boring track if it had a drum beat throughout without missing a few or adding in other instruments. So, beat matching code should be able to locate the peaks in the spectrum at regular intervals and anticipate where the missing ones should be for the whole track so that both the BPM and the position of the strongest part of the beat can be found.
You probably knew or worked this out already anyway 8)
Hope it helps a little though 😀
TS
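To make the idea described above concrete, here is a minimal sketch of that kind of test: flag a beat when the instantaneous energy in a low-frequency band jumps well above its recent average. This is not FMOD- or BASS-specific code; the per-block band energy input, the roughly one-second history and the 1.5x threshold are illustrative assumptions, and a real detector (like the one posted later in this thread) would track several sub-bands and then look for regular spacing between the flagged blocks.

#include <cstddef>
#include <deque>
#include <numeric>

// Illustrative energy-threshold beat flag for a single frequency band.
// 'bandEnergy' is the summed FFT magnitude of the low bins (the kick-drum
// region, say) for the current block of samples; the history covers roughly
// the last second of blocks.
class SimpleBeatFlag {
public:
    explicit SimpleBeatFlag(std::size_t historyBlocks = 43, double threshold = 1.5)
        : maxHistory(historyBlocks), factor(threshold) {}

    // Returns true if this block looks like a beat in this band.
    bool onBlock(double bandEnergy) {
        bool beat = false;
        if (!history.empty()) {
            double avg = std::accumulate(history.begin(), history.end(), 0.0)
                         / static_cast<double>(history.size());
            // instantaneous energy well above the recent average => beat
            beat = bandEnergy > factor * avg;
        }
        history.push_back(bandEnergy);
        if (history.size() > maxHistory) history.pop_front();
        return beat;
    }

private:
    std::deque<double> history;
    std::size_t maxHistory;
    double factor;
};

// At 120 BPM this should fire roughly twice a second; counting the blocks
// between flags (and multiplying by block length / sample rate) gives a
// first estimate of the BPM.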
they have removed the bass-fx dll that supplied an accurate BPM in a few seconds. It's being re-released in a few weeks; I'll forward you the dll from your PM.
ps.. It works very well, I've matched it against other systems (Traktor and MixMeister) and it seems spot on, however you seem to get more accuracy if the file has been adjusted with replay gain, I might wrap the dll to include the replay gain also.
t’ta
OK, this is going to hurt, but below is an easy way to get BPM detection, maybe this will kick someone into developing the same for FMOD.
Download BASS, it comes with a free BPM detection unit. I use BASS to build an audio database with BPM, energy, gain and stuff and then use FMOD for the rest. I hate BASS but it's got a free BPM dll and it works a dream. You probably won't be able to use BASS at the same time as FMOD for real-time BPM stuff.
PS. BASS is horrible, but as everybody is keeping their BPM routines (GOOD ONES) to themselves (and I'm too thick to do my own) then I make do..
As I told in another thread, I tried to write a beat detection algorithm for myself. It is messy and doesn’t work very well (yet), but it might inspire you a bit to write a better one :).
My algorithm is based on the [url=]Beat Detection Algorithms[/url:skd00i39] paper (especially pages 9 to 11).
It covers detecting the beats themselves, but not how to find a regular beat pattern, which is needed to calculate the BPM value. So I invented something for myself, as you can see in my code (below the line "// try to find a regular beat on each subband").
I have written some comments between my code to make my thoughts clear, I hope you understand them a bit.
As I told, it basically works and seems to calculate an accurate BPM value for some songs (especially those with a strong beat.), but there is still a lot of room for improvement. I was thinking of following things as possible improvements:
- Use a derivation filter to detect the beats themselves (as stated on page 13 of the “Beat Detection Algorithms” paper)
- Find a better way to find a regular beat pattern on each subband. (I guess this is the trickiest and most important part)
- Use another measure to determine which subband contains the most regular beat pattern.
I think it would result in a very accurate algorithm if those things get straight, but I do not have the time to work at it at this moment.
This is my code:
[code:skd00i39]fftw_complex *fgFFTin; // input array for FFT
fftw_complex *fgFFTout; // output array for FFTW
fftw_plan fgFFTp; // the FFT plan :)
double history[32][43]; // history buffer
int historycount[32]; // how much values are stored in the history buffer?
int beattimer[32]; // used to count the number of blocks between the current and the previous beat.
vector<int> intervals[32]; // the values of the beattimer are stored in an interval history, so we can calculate the expected time as an average of those values.
int blockswithoutbeat[32]; // this is used to check if there is enough time between every 2 beats (a minimum of 0.3 sec seems a good value to me)
int mode[32];
/*
the mode indicates the state of the beat detection in each subband.
if it is equal to 0, the "expected time" between 2 beats is not known,
so we try to detect 2 beats and calculate the time between them, and use that value as the new "expected time" between 2 beats.
(and store it in the intervals[i] vector). Then we move on to mode 1, which means that everything is going OK: the expected time between 2 beats is known,
and the newly arriving beats occur at the expected times.
When a newly arriving "beat" occurs too early or to late (i.e. not at the expected time), the mode number is increased.
Otherwised, it is decreased (with a minimum value of 1, because 0 is the initialization mode)
When the mode reaches 10, it means that there have been a lot of beats that were not at the expected time, so we have to reinitialize.
*/
double bpm[32];
int bpmAccuracy[32];
double fgBPM;
double fgBPMAccuracy;
void init() {
fgFFTin = (double(*)[2])fftw_malloc(sizeof(fftw_complex) * 1024); fgFFTout = (double(*)[2])fftw_malloc(sizeof(fftw_complex) * 1024); fgFFTp = fftw_plan_dft_1d(1024, fgFFTin, fgFFTout, FFTW_FORWARD, FFTW_MEASURE);
}
void* dspcallback(void *originalbuffer,void *newbuffer,int length,int param) {
signed short *stereo16bitbuffer = (signed short *)originalbuffer;
int N = 32; // number of subbands
for (int loop = 0; loop < 17; loop++) { // the original buffer consists of 17 * 1024 samples
// fill the input array for the FFT for (int count = 0; count < 1024; count++) { fgFFTin[count][0] = *stereo16bitbuffer++; fgFFTin[count][1] = *stereo16bitbuffer++; } fftw_execute(fgFFTp); // compute the square of the modulo of the complex numbers that are returned by FFT double B[512]; for (int count = 0; count < 512; count++) { B[count] = sqrt(fgFFTout[count][0] * fgFFTout[count][0] + fgFFTout[count][1] * fgFFTout[count][1]); } double Es[N]; double Ei[N]; // divide the spectrum in N subbands, on a logaritmic scale for (int i = 0; i < N; i++) { Es[i] = 0; int fromK = 0; int toK = 0; for (int j = 0; j < i; j++) { fromK += w(j); } for (int j = 0; j < i+1; j++) { toK += w(j); } for (int k = fromK; k < toK; k++) { Es[i] += B[k]; } Es[i] *= (double)w(i); Es[i] /= 512.0; // don't know why :) Ei[i] = 0; // compute average energy for subband i for (int k = 0; k < 43; k++) { Ei[i] += history[i][k]; } if (historycount[i] > 43) { Ei[i] /= 43.0; } else { Ei[i] /= historycount[i]; } } // shift history buffer one position to the right and insert new computed value at first position for (int i = 0; i < N; i++) { for (int k = 42; k > 0; k--) { history[i][k] = history[i][k-1]; } history[i][0] = Es[i]; if (historycount[i] < 43) historycount[i]++; } bool beat = false; // try to find a regular beat on each subband for (int i = 0; i < N; i++) { if (Es[i] > 7500 && Es[i] > 4.0*Ei[i]) { // there is a beat, should we accept it? // there should be a certain amount of time between every 2 beats. // 13 blocks ~= 0.3 sec, which means that ~= 200bpm is the highest beat rate that is acceptable. if (blockswithoutbeat[i] >= 13) { if (mode[i] == 0) { // we are in the initialisation mode if (beattimer[i] == 0) { beattimer[i]++; blockswithoutbeat[i] = 0; } else { intervals[i].clear(); // since this is the initialisation phase, we clear the interval history intervals[i].push_back(beattimer[i]); mode[i] = 1; beattimer[i] = 0; blockswithoutbeat[i] = 0; // at this point, we have established the time between the first and the second beat, // and it is stored in intervals[i]. } } else if (mode[i] < 10) { double expectedtime = 0; if (intervals[i].size() == 0) { exit(1); // this shouldn't happen } // the expected time between the current and the previous beat is the average of the time intervals between all the previous beats. for (vector<int>::iterator p = intervals[i].begin();p != intervals[i].end(); p++) { expectedtime += *p; } expectedtime /= (double) intervals[i].size(); bpm[i] = 60.0 / ((1024.0 / 44100.0) * expectedtime); double quotient = expectedtime / (double)beattimer[i]; // this value indicates how much the time between the current and the // previous beat deviates from the expected time. If it is equal to 1, the current beat occured exactly at the expected moment. double margin = 0.5 / (double) (intervals[i].size()); // the larger the size of the interval history, // the less tolerant we will be when deciding if a beat is "on time" or not. if (margin > 0.1) margin = 0.1; if (quotient < 1.0 + margin && quotient > 1.0 - margin ) { // the beat occured at the expected time... bpmAccuracy[i]++; // ... which means that this subband may provide an accurate beat rate intervals[i].push_back(beattimer[i]); if (intervals[i].size() > 25) { // we keep a maximum of 25 intervals in the interval history. intervals[i].erase(intervals[i].begin()); if (intervals[i].size() != 25) { cout << "Strange things happening!" 
<< endl; exit(1); } } beattimer[i] = 0; if (mode[i] > 2) mode[i]-= 2; else if (mode[i] > 1) mode[i]--; blockswithoutbeat[i] = 0; } else if (beattimer[i] < expectedtime) { // the beat was too early, so we don't reset the beattimer. beattimer[i]++; mode[i]++; } else { if (quotient < (1.0 + margin)/2.0 && quotient > (1.0 - margin)/2.0) { // we have probably "missed" a beat, since the actual time is +/- the double of the the excepted time beattimer[i] = 0; } else { // don't know why I did this beattimer[i] -= (int)expectedtime; } mode[i] += 2; blockswithoutbeat[i] = 0; } } else { // we lost track, so we go back to initialisation mode mode[i] = 0; beattimer[i] = 0; intervals[i].clear(); bpm[i] = -1.0; blockswithoutbeat[i] = 0; bpmAccuracy[i] = 0; } } else { blockswithoutbeat[i]++; if (blockswithoutbeat[i] > 100) { bpmAccuracy[i] = -1; bpm[i] = -1; mode[i] = 0; } beattimer[i]++; } } else { blockswithoutbeat[i]++; if (mode[i] > 0 || beattimer[i] > 0) beattimer[i]++; } } // try to find the subband with the most accurate beat detection. int max = 0; for (int i = 0; i < 32; i++) { if (bpmAccuracy[i] >= bpmAccuracy[max] && bpm[i] > 40.0 && bpm[i] < 240.0) max = i; } if (bpmAccuracy[max] > 0) { fgBPM = bpm[max]; // we try to keep the displayed bpm value between 60 and 180. if (fgBPM < 60) { fgBPM *= 2.0; } if (fgBPM > 180) { fgBPM /= 2.0; } fgBPMAccuracy = bpmAccuracy[max]; } else { fgBPM = -1; fgBPMAccuracy = 0; } } return originalbuffer;
}
[/code:skd00i39]
It can also be found [url=]here[/url:skd00i39].
I hope this helps anyone, and, if you get it improved, please let me know!
You can contact me at stijn (dot) lamens (at) ua (dot) ac (dot) be for further questions or comments if I seem dead on this forum.
Stijn
Thanks guys, looks like a wealth of information there. I think I was getting confused about the the FFT stuff thinking that I would be using the whole spectrum that was calculated. I’d actually looked at another paper already that had talked about finding peaks in the spectrum to identify the kick and snare drum and then look for alternating patterns of these (probably good for more types of music than those with a kick drum on every beat).
I may look at the Bass thing, especially if they have any info on how it’s done. It’s probably not really something I can use though as this is the topic of my university honours project. Well maybe I could but it would require changing the focus of my investigation to look at what would make a good virtual DJ maybe in terms of how it plays tracks and stuff.
I must have missed that paper somehow, how did you find it? I’ve got lots of papers already, mostly kinda high level and not about the actual implementation of it. Scheirer gets referenced quite a lot and I can’t remember the name of the other one but thats where i got the kick and snare drum related one. I really appreciate you sharing your code as well, I’ve been wanting more code to try and understand what happens in a dsp callback so that really is spot on. Even if you say it is messy I’m sure it will push me forward and if I do make anything interesting I’ll get back.
Cheers all,
Matt
|
https://www.fmod.org/questions/question/forum-7200/
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
Fay is a proper subset of Haskell that compiles to JavaScript. Thus it is by definition a statically typed lazy pure functional language. If you want a more thorough introduction to Fay, please read Paul Callaghan’s Web Programming in Haskell and Oliver Charles’s 24 Days of Hackage: fay.
The original intention of Fay is to use Haskell on the client side. If you use a Haskell web framework such as Yesod or Snap, using Fay you can use the same language on both client and server sides and some code can actually be shared.
However, because Fay is simply a subset of Haskell that compiles to JavaScript with no dependencies on the client side, you can use it on the server side too in combination with Node.js. I am not saying it is actually a good idea to write server code in Fay, but it is at least fun to investigate the feasibility. Here is a web server example written in Fay.
{-# LANGUAGE EmptyDataDecls #-}
module Hello where
EmptyDataDecls is required because JavaScript types are represented by empty data declarations in Fay.
import FFI
The FFI module provides a foreign function interface.
data Http
data HttpServer
data Request
data Response
Http, HttpServer, Request and Response are JavaScript types we use in this example. They are represented by empty data declarations.
requireHttp :: Fay Http
requireHttp = ffi "require('http')"
This is a simple example of an FFI declaration. It returns the result of require('http') as an Http instance. Fay is a monad which is similar to the IO monad. Because an FFI function often has side effects, the Fay monad is used to represent this.
createServer :: Http -> (Request -> Response -> Fay ()) -> Fay HttpServer
createServer = ffi "%1.createServer(%2)"

consoleLog :: String -> Fay ()
consoleLog = ffi "console.log(%1)"

listen :: HttpServer -> Int -> String -> Fay ()
listen = ffi "%1.listen(%2, %3)"

writeHead :: Response -> Int -> String -> Fay ()
writeHead = ffi "%1.writeHead(%2, %3)"

end :: Response -> String -> Fay ()
end = ffi "%1.end(%2)"
These FFI declarations use %1, %2, ... placeholders that correspond to the arguments we specify in the type. Most Fay types are automatically serialized and deserialized. Note that we can only use point-free style in FFI functions.
main :: Fay ()
main = do
  http   <- requireHttp
  server <- createServer http (\req res -> do
    writeHead res 200 "{ 'Content-Type': 'text/plain' }"
    end res "Hello World\n")
  listen server 1337 "127.0.0.1"
  consoleLog "Server running at"
main is the entry point to our web server example. Its return type is Fay () because a Fay program can't do anything without interacting with the world outside. Because we have already wrapped all the Node.js APIs we use, we can program as if we were writing a normal Haskell program.
Compare our Fay web server program with the original Node.js program. Except for the FFI bindings, the main code is almost the same as before. However, our version is much more type-safe!
var http = require('http');

http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
}).listen(1337, '127.0.0.1');

console.log('Server running at');
|
http://kseo.github.io/posts/2014-03-11-fay-with-nodejs.html
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
Hi all,
I am new to Silverlight and struggling a bit to understand this framework... :(
Can someone please tell me how to use this framework to make the DataGridTextColumn bindable in a ViewModel-friendly way?
Many thanks!
Xin
Hi Xin,
Below is sample code that shows how to make a bindable DataGridTextColumn class. In this case we have just made the Header property bindable, but you could make other properties bindable too.
public class BindableDataGridTextColumn : DataGridTextColumn
{
    public static readonly DependencyProperty HeaderProperty =
        DependencyProperty.Register("Header",
            typeof(object),
            typeof(BindableDataGridTextColumn),
            new PropertyMetadata(OnHeaderChanged));

    private Dictionary<DependencyProperty, BindingInfo> _pendingBindings;
    private FrameworkElement _associatedObject;

    public FrameworkElement AssociatedObject
    {
        get { return _associatedObject; }
        set
        {
            _associatedObject = value;
            if (value != null) ApplyAllPendingBindings();
        }
    }

    public Binding HeaderBinding
    {
        get { return GetBinding(HeaderProperty); }
        set { SetBinding<object>(HeaderProperty, value); }
    }

    #region Binding Helpers

    protected void SetBinding<P>(DependencyProperty bindingProperty, Binding value)
    {
        SetBinding(bindingProperty, typeof(P), value);
    }

    protected void SetBinding(DependencyProperty bindingProperty, Type bindingType, Binding value)
    {
        if (this.AssociatedObject != null)
        {
            this.SetAttachedBinding(AssociatedObject, bindingProperty, bindingType, value);
        }
        else
        {
            if (_pendingBindings == null)
                _pendingBindings = new Dictionary<DependencyProperty, BindingInfo>();
            _pendingBindings.Add(bindingProperty,
                new BindingInfo() { BindingProperty = bindingProperty, BindingType = bindingType, Binding = value });
        }
    }

    protected Binding GetBinding(DependencyProperty bindingProperty)
    {
        if (this.AssociatedObject != null)
            return this.GetAttachedBinding(bindingProperty);
        else if (_pendingBindings != null && _pendingBindings.ContainsKey(bindingProperty))
            return _pendingBindings[bindingProperty].Binding;
        return null;
    }

    protected void ApplyAllPendingBindings()
    {
        if (_pendingBindings == null || _pendingBindings.Count == 0) return;

        // set all bindings
        foreach (var _bindingInfo in _pendingBindings.Values)
            this.SetAttachedBinding(AssociatedObject, _bindingInfo);

        // clear the pending
        _pendingBindings.Clear();
        _pendingBindings = null;
    }

    protected void ClearAllBindings()
    {
        if (_pendingBindings != null) _pendingBindings = null;
        this.ClearAttachedBindings();
    }

    #endregion

    #region Handlers

    static void OnHeaderChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
    {
        ((BindableDataGridTextColumn)d).UpdateHeader(e.NewValue);
    }

    void UpdateHeader(Object header)
    {
        this.Header = header;
    }

    #endregion
}
Now, there is one thing you need to supply before it will work – the AssociatedObject (to which the binding "attaches"), of type FrameworkElement. For now, we could just loop and set the DataGrid itself as the associated object:
foreach (var _col in this.dataGrid.Columns)
{
    if (typeof(BindableDataGridTextColumn).IsAssignableFrom(_col.GetType()))
    {
        ((BindableDataGridTextColumn)_col).AssociatedObject = this.dataGrid;
    }
}
A better way would be to make the setting of the AssociatedObject a behavior, and perhaps introduce an Interface.
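One rough way to do that – purely as an illustrative sketch, with assumed class names, not code from the sample above – is a System.Windows.Interactivity behavior attached to the DataGrid:

using System.Windows.Controls;
using System.Windows.Interactivity;

// Illustrative sketch: when attached to a DataGrid, hand the grid to every
// BindableDataGridTextColumn it contains, so that pending bindings get applied.
public class BindableColumnsBehavior : Behavior<DataGrid>
{
    protected override void OnAttached()
    {
        base.OnAttached();
        AssociatedObject.Loaded += delegate
        {
            foreach (var column in AssociatedObject.Columns)
            {
                var bindable = column as BindableDataGridTextColumn;
                if (bindable != null)
                    bindable.AssociatedObject = AssociatedObject;
            }
        };
    }
}

It would then be attached in XAML via <i:Interaction.Behaviors> on the DataGrid, so no code-behind loop is needed.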
Hope that helps,
Rishi
Could you please help me with the binding syntax? I think it is going to be something like Header={Binding HeaderBinding}, but how can I read the current value that will be displayed in the column header? I'm trying to internationalize (i18n) the headers of my SL application.
Thanks!
Paul
Thanks Orktane, and sorry for the late reply.
I think I will wait until Silverlight 4 comes out, because with that version we will be able to bind on any DependencyObject, so the DataGrid column header will be bindable... :)
@pauljs, with the BindableDataGridTextColumn you will use the HeaderBinding property, so something like this will work:
<BindableDataGridTextColumn HeaderBinding="{Binding SalesTitle}" ... />
where SalesTitle is a property in your ViewModel. Now, for more realistic solutions, rather than having a per-column header property in your ViewModel, you would use some kind of TypeDescriptor or reflected metadata from your Model.
@Xin, absolutely, SL4 will make binding life so much easier.. Also, if you find the BindableDataGridTextColumn thing a bit overwhelming, there is another solution for SL3 - you can make use of ValueRelays. Basically, you declare ValueRelays as a resource, and then using BridgeValueBehavior you can source the value from your ViewModel. Once you've bridged to your ViewModel, you can bind the ValueRelay to your header with a StaticResource binding (e.g. Header={StaticResource SalesTitleRelay}). The problem with this is that you'll have to do it per column - so that's the trade-off.
Value Relays are described here, and they work just like the Command Relays sample described in that post.
Thanx! It worked! Now, regarding having a per-column header property in the ViewModel, my approach is to have a property in the ViewModel that returns the generated, strongly typed resource class, so I can access the values from the XAML.
Thanks again!
Paul.
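A minimal sketch of the approach Paul describes, assuming a public resx-generated class (here called ColumnHeaders – an assumed name) whose properties the binding engine can reach through an instance:

// Illustrative sketch: expose the generated resource class from the ViewModel,
// so a column can bind its header, e.g. HeaderBinding="{Binding Headers.SalesTitle}".
public class CustomersViewModel
{
    // ColumnHeaders stands in for the strongly typed class generated from a .resx file.
    private static readonly ColumnHeaders _headers = new ColumnHeaders();

    public ColumnHeaders Headers
    {
        get { return _headers; }
    }
}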
|
http://nroute.codeplex.com/discussions/78167
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
Group Title: TropSoils field research brief ; 34
Title: Farmer and crop responses to different sources of fertilizers: a farmer-managed study on home gardens
Series Title: TropSoils field research brief ; 34
Physical Description: 9 leaves ; 28 cm.
Language: English
Creator: Agus, Fahmuddin; Soil Management Collaborative Research Support Program; Lembaga Penelitian Tanah
Publisher: Soil Management Collaborative Research Support Program, North Carolina State University
Publication Date: 1987
Subjects: Fertilizers -- Research -- Indonesia; Soil management -- Indonesia; Farmers -- Attitudes -- Indonesia
Spatial Coverage: Indonesia
Notes: Caption title. "January 1987." At head of title: Centre for Soil Research.
Bibliographic ID: UF00080610; Volume ID: VID00001
Source Institution: University of Florida
Rights Management: All rights reserved by the source institution and holding location.
Resource Identifier: oclc - 162136243

Full Text

PUSLITTAN TROPSOILS (CSR) FIELD RESEARCH BRIEF
Centre for Soil Research; Jl. Ir. H. Juanda 98; Bogor, Indonesia (0251) 23012
Contact: TROPSOILS; Box 02; Sitiung 1A; Sumatera Barat
DATE: JANUARY 1987  NUMBER: 34
TITLE: Farmer and Crop Responses to Different Sources of Fertilizers: A Farmer-managed Study on Home Gardens
EXPERIMENT NO.: 3108 and 3508
RESEARCHERS: Fahmuddin Agus, Carol J. Pierce Colfer, Stacy Evensen and Sholeh

OBJECTIVES: 1) Observing farmers' responses to different sources of fertilizers (organic and inorganic), 2) introducing other varieties of crop that can be produced on home gardens, and 3) comparing crop responses to different kinds of fertilizers in relation to costs. A fourth objective---working collaboratively with women farmers in a more effective manner---was not realized.

SOIL: This experiment was initiated with 13 cooperator farmers in Sitiung V (Aur Jaya) and in Sitiung I (Piruko). Each farmer was considered to be a farmer-managed replication. The soil in Sitiung I is predominantly Inceptisol and in Sitiung V Ultisol. The fertility of plots within each farmer's field and from one farmer to another was quite variable, due to the different methods of soil management and to the various crops that had been planted on any particular spot.

EXPERIMENTAL DESIGN: Treatments tested in this study were:
Treatment 1 = Control; no fertilizer application.
Treatment 2 = 10 t/ha of barnyard manure (broadcast).
Treatment 3 = 10 t/ha compost (broadcast).
Treatment 4 = (kg/ha) 100 Urea, 125 TSP, 125 KCl, and 80 kieserite (MgSO4) (broadcast).
Treatment 5 (5 replications) = 25 t/ha fishpond sediment (averaging 73 % water content on a weight basis; broadcast).
These treatments were arranged in a Randomized Complete Block design with 7 Javanese farmer-managed replications in Sitiung I and 6 in Sitiung V (all Sundanese).

CROPS: Chili (Capsicum annuum L.) was planted in September 1985, and bambara groundnuts (Voandzeia subterrania) were planted in March 1986. Chili seedlings, bambara nut seeds and pesticide were distributed, in addition to inorganic fertilizers, compost, and, in Sitiung V, barnyard manure. Any farmers' responses, complaints and comments were recorded, in addition to a pre-planned set of parameters.

RESULTS:
I. Farmers' responses to the crops

1. Chili

When the farmers were asked what kind of crop they would like to grow for the experiment, their initial response was almost unanimous: "It's up to you." Since one of our main interests was farmer input, we tried again. We presented several crops such as chili, tomato, bambara groundnuts, spinach, wing bean, etc., to the farmers as possibilities, and asked them to choose again. This time they chose chili.

Chili is the most common spice (appetizer), normally eaten by the transmigrants with rice and vegetables (transmigrants don't eat much meat on a daily basis). The amount of chili consumption is variable among individuals, families, and ethnic groups, but from a global perspective it is probably high for almost everyone in West Sumatra. Despite its ubiquity in the diet, it is not widely grown among transmigrants except for a few Sundanese who were growing lots of it in large plots in their home gardens in Sitiung V in 1983-84. Some simply preferred growing other food crops, and some said they had no experience growing chili and hadn't thought of doing so. They were interested in trying it, given our provision of supplies and guidance.

From a nutritional viewpoint, chili contains mostly carbohydrates and vitamins (especially vitamin A). Although its hot taste requires that it be eaten in fairly small amounts, it may provide small amounts of vitamin A (important, since blindness caused by vitamin A deficiency is a widely recognized Indonesian health problem). Its price fluctuates widely. In early 1984 its price was as high as Rp. 4,000 per kg. This figure can also drop as low as Rp. 500 per kg (in February 1986, US$ 1 = Rp. 1,126).

2. Bambara groundnuts

The method of choosing this crop was similar to the way we chose chili. Only after being given some alternatives did the farmers spontaneously choose bambara nuts. The reason was that they just like the crop, and even though a few of them in Sitiung V had planted it, it was difficult for them to find seed.

Bambara groundnuts are normally planted only in home gardens, by a few Javanese and Sundanese transmigrants. Usually only small amounts of land are devoted to this crop, and its purpose is home consumption. The market for this crop is undeveloped in Sitiung, since most of the indigenous communities, where the markets are centered, are completely unfamiliar with it. This, as well as the difficulty in getting seeds, may be the reason that farmers don't grow it in large quantities. However, although the local market for bambara groundnuts is limited now, the nutritional benefits of its cultivation make it an excellent home garden crop. Bambara groundnuts are a rich source of protein, carbohydrates, and iron and can substantially enhance the family's diet. They are also easy to harvest and prepare.

II. Farmer response to treatments

Most cooperator farmers readily recognized the usefulness of inorganic fertilizers, compost, and barnyard manure. Many of them were curious when we introduced the use of fishpond sediment. They anticipated that it would not be very different from soil, and would require a lot of work to apply. Furthermore, only half of the cooperator farmers had fishponds on their home gardens. Many farmers even preferred the control treatment to the sediment treatment. The farmers' practice of neither feeding their fish nor fertilizing their ponds (Dudley and Hidayat 1986) meant that the pond sediments were usually low in nutrients.
This low nutrient content was reflected in low crop yields (Table 2). At the time of this study, inorganic fertilizer was the most common soil amendment in Sitiung V. This was primarily because the farmers did not have ruminants which could supply large amounts of manure. Another important factor was that government fertilizer recommendations focused almost exclusively on inorganic fertilizers. Despite not having a good source of manure, the farmers recognized its value. Manure use is very common on Java, where these farmers learned to farm.

Compost is a kind of organic fertilizer which needs some work prior to application. This treatment was considered to be time-consuming. However, most of the farmers also believed that this amendment would be good for crops in general. Their response leads us to believe that there is still hope for composting, if we can introduce a simplified method of compost preparation. A few farmers prepared compost in a garbage well. Separating out plastic and other nonweatherable materials, the farmers dumped garbage from the kitchen and from the home garden into the well. They let the material rot and used it whenever they wanted it. Although this is a very simple method, it is impossible to produce a large amount of compost this way, so only a limited kind and number of crops can benefit from it.

In Sitiung IA, manure was used almost as widely as inorganic fertilizers. Manure might even surpass the popularity of purchased fertilizers there. Almost every family in Sitiung I had ruminant livestock. Manure, along with inorganic fertilizers, was often applied, particularly to paddy rice. There were more trees in Sitiung I home gardens, probably because of longer residence. Selected kinds of trees in Sitiung I had also been fertilized with manure, depending on its availability. Farmers there have less open space and thus devote less attention to annual crop production. Trees like coconut, clove, jackfruit and coffee dominated these home gardens. Part of each farmer's home garden was devoted to a stable for cattle and goats and, on some, a fishpond or two. Almost every farmer raised chickens. Sitiung I farmers showed less interest in compost than did Sitiung V farmers, saying that since they had manure, there was no need to bother with making compost. Even though there are more fishponds in Sitiung I, farmers still think applying the fishpond sediment would require a great deal of work. Although they occasionally dig out the fishponds, the sludge is simply piled along the edge of the ponds. Coconuts and other trees grew along the edges of many fishponds. Table 1 shows the farmers' rankings of the various treatments.

III. Crop response to treatments

Statistical analysis in Table 2 shows that, for chili, manure, inorganic fertilizers, and compost gave significantly higher yields than either the fishpond sediment or the control. (NOTE: The fishpond sediment treatment had fewer replications than the others.) For bambara groundnuts, inorganic fertilizers produced the highest yield of nuts. Compost and manure were not significantly different from each other, but they were significantly lower than inorganic fertilizers, and significantly higher than either the control or the fishpond sludge.

IV. Economic realities

Since there were significant effects of treatments on yields, the data are more meaningful if economic analysis is done in addition to statistical analysis. Tables 3 and 4 show the summary of the partial budget analysis.
Table 3 shows that for chili, inorganic fertilizer application, at the rate applied in this treatment, gave a very promising MRR (1,332 %). This means that when we invest an additional Rp. 61,000 to apply the inorganic fertilizer, we receive 1,332 % x Rp. 61,000 more in addition to the Rp. 61,000 we invested. And if more money (Rp. 55,000) is invested to change the inorganic fertilizer treatment to the manure treatment, we will receive 288 % x Rp. 55,000 more in addition to the Rp. 55,000. From Table 4, it can be seen that inorganic fertilizer application was the only economically viable treatment (with an MRR of 750 %). Other treatments were dominated by this treatment. This means that additional money invested for the other treatments did not give an increase in net benefit. However, long-term effects of fertilizer use versus organic matter use on soils have not been studied.

CONCLUSIONS AND SUGGESTIONS:

1. Chili and bambara groundnuts seem to be promising crops for cultivation on home gardens. These two crops gave significant responses to barnyard manure, compost, and inorganic fertilizers at the rates applied in this trial. The application of fishpond sediment (from ponds with such low levels of management and at the rates of sludge applied) did not significantly improve yields over no soil amendment at all (the control treatment). Farmers also preferred the first three treatments over the other two.

2. From the viewpoint of production only, manure, inorganic fertilizers and compost gave good yields of chili and bambara nuts. However, if cost-benefit is taken into account, we would not recommend these treatments equally. For chili, manure application was the most economical (assuming a Minimum Rate of Return of 100 %). Inorganic fertilizer application is still recommendable in cases of capital/labour shortage for manure. For bambara nuts, inorganic fertilizer application was the most economical. Statistically, the manure and compost treatments were significantly higher than the control and fishpond sediment treatments. However, since they needed higher input and resulted in less net benefit than inorganic fertilizer application, these two treatments were not economical.

Table 1. Farmers' preference for different sources of fertilizers (number of farmers ranking each treatment).

Treatment      1st   2nd   3rd   4th   5th
Control         0     0     0    11     2
Compost         2     1     9     0     0
Manure          6     5     2     0     0
Fertilizer      5     7     1     0     0
F. sediment     0     0     0     2     3

Table 2. Effects of different sources of nutrients on chili and bambara groundnut production.

Treatment           Average yield (kg/ha) *)
                    Chili      Bambara nuts
Control              860 b        2900 c
Compost             1677 a        3860 b
Manure              1883 a        3810 b
Fertilizer          1682 a        4180 a
Fishpond sediment    875 b        2960 c

*) Average of thirteen replications, except for the fishpond sediment treatment, which is the average of seven replications.
**) Any two means having a common letter are not significantly different at the 5 % level of significance using DMRT. CV = 44 % (chili), 28 % (bambara nuts).

Table 3. Summary of partial budget calculation for chili under this experiment *)
Treatment     Ay (kg/ha)    GB (Rp)      NB (Rp)      TCV (Rp)   MRR (%)
Control           731        913,750      913,750           0       -
F. sediment       744        930,000      870,000      60,000       d
Fertilizer      1,430      1,787,500    1,726,500      61,000   1,332
Manure          1,601      2,001,250    1,885,250     116,000     288
Compost         1,425      1,781,250    1,583,250     198,000       d

*) See Appendix 1 for the details of the partial budget analysis. "d" in the MRR column means dominated; a treatment is dominated when its TCV is higher but its NB is lower than another treatment's.

Table 4. Summary of partial budget calculation for bambara nuts under this experiment *)

Treatment     Ay (kg/ha)    GB (Rp)      NB (Rp)      TCV (Rp)   MRR (%)
Control         2,465        616,250      616,250           0       -
Fertilizer      3,553        888,250      856,250      32,000     750
F. sediment     2,516        629,000      569,000      60,000       d
Manure          3,293        823,250      707,250     116,000       d
Compost         3,281        820,250      622,250     198,000       d

*) See Appendix 1 for the details of the partial budget analysis. "d" in the MRR column means dominated; a treatment is dominated when its TCV is higher but its NB is lower than another treatment's.

Note: US$ 1 = Rp. 1,126. Ay = adjusted yield; GB = gross benefit; NB = net benefit; TCV = total costs that vary; MRR = marginal rate of return.

Appendix 1. Partial budget analysis for chili and bambara nuts under this experiment

This appendix explains Table 3 and Table 4. Further explanation of the way we analyze partial budgets can be found in Harrington.

1. Adjusted yield. Yield was adjusted because the researcher-managed harvest areas were very small (10 m2). This adjustment was based on the researchers' judgement of how yields would be biased if this experiment were done in a large area. In this experiment, yield was adjusted 15 % downward, or: Adj. Y = 0.85 x average yield. Average yield is the average of yields per location as in Table 2.

2. Gross benefit was obtained from the following formula: GB = Adj. Y x field price, where field price (Rp/kg) = market price - yield-related costs. Yield-related costs are such costs as the cost of harvesting, shelling, transportation, drying, etc. This cost depends strongly on the amount of yield.

a. Field price for chili: market price = Rp. 2,000/kg. Yield-related costs: harvest (opportunity) cost = Rp. 215/kg; transportation and handling cost = Rp. 35/kg; damage in storage and transportation = 25 % x Rp. 2,000 (estimate) = Rp. 500/kg. All yield-related costs = Rp. 750/kg. Field price = Rp. 1,250/kg.

b. Field price for bambara nuts: market price = Rp. 500/kg. Yield-related costs: harvest (opportunity) cost = Rp. 50/kg; shelling (opportunity) cost = Rp. 50/kg; transportation and handling cost = Rp. 25/kg; damage in storage and transportation = 25 % x Rp. 500 (estimate) = Rp. 125/kg. All yield-related costs = Rp. 250/kg. Field price = Rp. 250/kg.

3. Net benefit = gross benefit - total costs that vary.

4. Total costs that vary (TCV) is the difference in cost between each treatment and the control treatment. Costs that vary can come from the family; this kind of cost is termed opportunity cost.

TCV (control treatment) = 0.

TCV (compost treatment): Compost in this treatment was made from 7.5 tons of grass and 5 tons of barnyard manure for a hectare of land. Forty-two man-days were needed for grass (shrub) collection and compost preparation. We also used wooden boxes for the compost preparation. The wood used for the boxes was unsaleable wood cuttings from a nearby sawmill, so that the (opportunity) cost for a box was Rp. 6,000. Eight boxes of this kind were needed to fulfil the need of a hectare of land. Eight man-days were needed to apply the compost.
Five tons of manure had an (opportunity) cost of Rp. 50,000. TCV for compost was Rp. (50 x 2,000 + 8 x 6,000 + 50,000) = Rp. 198,000.

TCV (manure treatment): 10 tons of manure = Rp. 100,000. The (opportunity) cost of application (including carrying to the field) was equivalent to 3 man-days = Rp. 15,000. TCV manure = Rp. 116,000.

TCV (fertilizers):
1. Chili: 100 kg Urea @ Rp. 100/kg = Rp. 10,000; 125 kg TSP @ Rp. 100/kg = Rp. 12,500; 125 kg KCl @ Rp. 100/kg = Rp. 12,500; 80 kg MgSO4 @ Rp. 300/kg = Rp. 24,000; (opportunity) cost for application, 1 man-day = Rp. 2,000. TCV fertilizer = Rp. 61,000.
2. Bambara groundnuts: No MgSO4 was applied for bambara groundnuts, and residual Mg from the first crop was neglected. Urea was applied at the rate of 50 kg/ha. 50 kg Urea @ Rp. 100/kg = Rp. 5,000; 125 kg TSP @ Rp. 100/kg = Rp. 12,500; 125 kg KCl @ Rp. 100/kg = Rp. 12,500; (opportunity) cost for application, 1 man-day = Rp. 2,000. TCV fertilizer = Rp. 32,000.

TCV (fishpond sediment treatment): Only the (opportunity) cost for labor can be calculated, since there is no market for fishpond sediment. The (opportunity) cost for digging and applying the sediment was equivalent to 30 man-days = Rp. 60,000. This is the TCV for the fishpond sediment treatment.

5. Marginal rate of return (MRR) is the percentage increase in NB divided by the increase in TCV. An investment can be made if the MRR exceeds the Minimum Rate of Return.

6. Minimum Rate of Return is the sum of the cost of borrowed capital and returns to management. In Indonesia, as well as in other countries where capital is scarce, the minimum rate of return is most probably 100 % or so.
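As a rough cross-check against the NB and TCV values printed in Table 3: MRR (fertilizer over control) = (1,726,500 - 913,750) / 61,000 ≈ 13.3, i.e. about 1,332 %, and MRR (manure over fertilizer) = (1,885,250 - 1,726,500) / (116,000 - 61,000) ≈ 2.89, i.e. about 288 %, consistent with the tabulated values.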
|
http://ufdc.ufl.edu/UF00080610/00001
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
#include <iostream>
#include <string>
using namespace std;

int main()
{
    string names[4] = {"Anna", "Jenny", "George", "Michael"};
    int score[4];

    for (int i = 0; i < 3; i++)
    {
        cout << names[i] << ": ";
        cin >> score[i];
        cin.ignore();
    }

    // sort by score
    for (int i = 0; i < 3; i++)
    {
        for (int j = 0; j < 3; j++)
        {
            if (score[i] < score[j])
            {
                string tmp_string;
                int temp;
                temp = score[i];
                tmp_string = names[i];
                score[i] = score[j];
                names[i] = names[j];
                score[j] = temp;
                names[j] = tmp_string;
            }
        }
    }

    for (int k = 0; k < 3; k++)
    {
        cout << endl;
        cout << names[k] << " = " << score[k] << " ";
    }
    system("pause");
}
I've compiled it (with dev-c++), and it runs well.
I just need a few minor things.
1. I need it to sort in descending order, not ascending.
2. I need to know how to either print the output, or get the output sent to a word processor so it can be printed.
3. When the output is displayed, is there a way to clear the screen beforehand, so the input is cleared?
--thanks a million, guys.
|
https://www.daniweb.com/programming/software-development/threads/90543/what-i-have-so-far
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
I have a palindrome checker set up in the system, and it runs fine.
However, when I type "never odd or even" or "A Man, A Plan, A Canal, Panama", it doesn't run properly, even though the phrases in quotation marks should be considered palindromes. Where is the loophole in my code regarding palindromes, especially ones with more than one word?
//This palindrome project
//requires for a determination and provide a better understanding
//of such concept in C++ project.
#include <iostream>
#include <cstring>
#include <cstdio>
using namespace std;

int main()
{
    char word[30], rev[30], chr;
    cout << "\t\tMy System.\n";
    cout << "\t\t\tMy Version\n\n";
    do
    {
        cout << "\nPlease enter word or number: ";
        cin >> word;
        int i, j;
        for (i = 0, j = strlen(word) - 1; j >= 0; i++, j--)
            rev[i] = word[j];
        rev[i] = 0;
        if (strcmp(rev, word) == 0)
            cout << "The word " << word << " is considered a Palindrome.";
        else
            cout << "The word " << word << " is not considered Palindrome.";
        cout << "\n\nDo you want to try again? Please press Y or N: ";
        cin >> chr;
    } while (chr == 'y' || chr == 'Y');
    cin.get();
    return 0;
}
|
https://www.daniweb.com/programming/software-development/threads/287985/determine-whether-is-palindrome-or-not
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
Proposed features/camp site pitch
Tagging of individual pitches
One might be interested in tagging individual pitches within a campground. A pitch means (in this context) the free space used to place one tent or one caravan.
This proposal builds on the earlier work in tagging individual pitches at Proposed features/Extend camp site but changes tag names to be more consistent in the use of namespace conventions and to use tag names consistent with current practices.
Tag Conventions
To avoid confusion (who gets confused?) between the sporting use of the word pitch (see leisure=pitch), a place for pitching a tent or parking a caravan is called a "camp pitch" (camp_site=camp_pitch), and the facilities at that pitch that are dedicated to the use of the pitch occupants use a camp_pitch:*=<value> namespace tagging system.
There may be items like tables, water supply, fire rings associated with the pitch. If the item is specifically for the use of the occupant of the pitch, then use pitch name space specific tags. If the item is shared by multiple pitches or by the campground as a whole then it should be tagged using a more general tag.
Tags within the camp_pitch:* namespace follow, as closely as possible, the naming used for the equivalent amenity if it were not dedicated to the pitch occupants. For example, a publicly available supply of drinking water is normally tagged amenity=drinking_water, so camp_pitch:drinking_water=yes/no is used to indicate whether or not there is an exclusive drinking water supply for that pitch.
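For illustration, a single pitch mapped as a node might then carry tags along these lines (camp_pitch:ref and camp_pitch:table are shown here only to illustrate the namespace pattern, not as tags defined by this proposal):

camp_site=camp_pitch
camp_pitch:ref=27
camp_pitch:drinking_water=yes
camp_pitch:table=yes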
Tagging
A camp pitch is tagged either as a point, located at the pitch identifying post or sign, or a way around the boundary of the pitch. The following tags are placed on the point or way:
1) Why the need for camp_site=camp_pitch? Why not camp_pitch=ref;addr;type;parking;table;stove;surface;electric;water;drain, where 'yes' is the default if the item is listed?
2) Grass pitches may move - to allow grass to recover from being camped on.
3) It should be clear that the part after "camp_pitch" should not just be made up. Those are OSM tags mentioned elsewhere. Brycenesbitt (talk) 23:26, 1 May 2015 (UTC)
|
http://wiki.openstreetmap.org/wiki/Proposed_features/camp_site_pitch
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
Hi,
I am about to port my application from PRISM to nRoute and would like to know the best approach for implementing a wizard. The wizard will have one view model and multiple views. I am going to use the future desktop as a base. The wizard will not be in a popup; it will appear in the main content area of the future desktop. Also, I would potentially like to have the Next/Previous/Finish wizard navigation buttons appear on the bar that has the add workspace and refresh favourite buttons in the future desktop example. Finally, if possible, I would like to have a general wizard control to which I can pass views/viewmodels, with the wizard navigation buttons appearing and operating correctly whenever a wizard is active within the main container/content area.
Thanks in advance for any help.
Now, there are many ways to do this, some options are:
1. One parent View and VM, along with multiple child Views:
Start with one set of View and VM, in the View just put a NavigationContainer, and then within the container you can have your Wizard Pages/Views flow. Importantly, the Wizard Pages/Views shouldn't specify a VM, this way they would inherit the parent VM.
So what you get is one ViewModel paired with Multiple Wizard Pages/Views.
2. One VM, and Multiple Mapped Views
Basically, just map one or more Views to different Urls, and then have the Views map to a single VM, something like:
[DefineNavigationContent("Pages/Wizard/Page1/", typeof(Page1))]
[DefineNavigationContent("Pages/Wizard/Page2/", typeof(Page2))]
[DefineNavigationContent("Pages/Wizard/Page3/", typeof(Page3))]
[MapViewModel(typeof(Page1))]
[MapViewModel(typeof(Page2))]
[MapViewModel(typeof(Page3))]
public class WizardViewModel
{
// do the do
}
3. One View and VM, with a Url Token
If possible I'd prefer this: rather than having multiple views, just have a parameterized Url. This way you can change the View per the value of the Url parameter/token:
[DefineNavigationContent("Pages/Wizard/{StepIndex}/", typeof(WizardPage))]
[MapViewModel(typeof(WizardPage))]
public class WizardViewModel : NavigationViewModelBase
{
// get the value StepIndex by overriding OnInitialize method
}
As you can tell, all the above options aren't standardized like a Wizard control, which unfortunately I can't provide out of the box with nRoute. However, you can create one yourself; the NavigationContainer model is very flexible and can provide you with all the required infrastructure, you just need to layer the Wizard-specific functionality onto it.
Hope this helps,
Rishi
PS: I'd recommend use of StatefulBrowsingNavigationContainer, read more about it here
Thank you. I will give it a try.
Hi Orktane, sorry, but I am struggling to know how to start (this is the very first part of my app that I am porting over to nRoute). Would you be able to explain option 1 in more detail, please?
How do I add the wizard pages to the container? To start, I would like to have one viewmodel and one view. The view will have an area for displaying the current wizard step/view, the wizard navigation buttons (Previous, Next, Finish and Cancel) and ideally tabs for each wizard step with the current step highlighted/selected. Do containers support tabs? Can you add a number of views to a container and then make it a tabbed container? If not, can I add all the pages to the container and then somehow say which one should be visible based on the current navigation step/index?
Also, how can I access the navigation container from within the view model? When I look at the static methods on NavigationService I only see a method for getting the default container. Is there some way of accessing a container by name, such as "WizardContainer", so that I could set which view within the container is active/displayed? Also, you mention that if a view is added to the container and the view doesn't have a view model specified then it will be given the parent view model; will this happen automatically?
Let me try answer your questions, point-by-point:
Q. How do I add the wizard pages to the container?
A. Well, you don't have to add them all at once; that's what you use navigation for. So, to navigate from one page to another, just use the NavigateAction behavior.
Q. Do containers support tabs?
A. Well, you must understand that a "navigation container" is something that can handle a navigation request/response (see the INavigationHandler interface). So tabs natively are not navigation containers, though minimally by either implementing INavigationHandler
by inheriting it or by creating a Navigation Adapter for it you can make it handle navigation (see for
an example on how to create adapters)
Q. About using Tabs?
A. Personally, I wouldn't use tabs for creating a Wizard control, but you could. The problem with the tabs control is that it tends to re-load pages when you switch tabs on/off. You can create a similar effect/solution by just using the provided containers - as all you'll be doing is showing one view at a time.
Q. Also how can I access the navigation container from within the view model?
A. You don't - in VMs you keep away the View related stuff. Also if you know how containers work (just like web browsers), when you navigate (like when you click a hyperlink) within a container it just picks up the container itself by walking up the
visual tree, unless you specify another container.
Q. Is there some way of accessing container by name?
A. Yes, see the "Globally Named Navigation Containers" section
Q. Also you mention that if a view is added to the container and the view doesn't have a view model specified then it will be given the parent view model, will this happen automatically?
A. Yes. And that's the setup I talked about in option 1. So you parent view will look something like:
<UserControl ...>
    <i:Interaction.Behaviors>
        <n:BridgeViewModel />
    </i:Interaction.Behaviors>
    <n:NavigationContainer ... />
</UserControl>
As you can see above we've initialized the container to start with the url "Pages/MyWizard/Page1" - that Url maps to a UserControl (say Page1.xaml) which has been earmarked with the MapNavigationContent attribute:
[MapNavigationContent("Pages/MyWizard/Page1")]
public partial class Page1 : UserControl
{
//..
}
Now, importantly, this page doesn't have its own VM - so it would inherit the parent's VM. So Page1.xaml could navigate to Page2.xaml, which would be the second screen in your Wizard, and Page2 would be mapped to a Url like "Pages/MyWizard/Page2".
So basically all you need to do is navigate from one Url to another, and you should get your wizard going.
Hope you got the basic idea?
Rishi
Hi Rishi,
Thanks for the reply, but I still don't see how the navigation would work between the different wizard pages. Are you suggesting that each wizard step/view has its own navigation buttons, rather than having the navigation buttons within the general wizard control? What I am unsure of is: when the user is on Page 1, how do they get to Page 2, and where is the url for Page 2 defined? I would like to avoid having to add wizard buttons to every view that is going to be a wizard step.
In my current PRISM implementation I have a general wizard control view/view model. I then add each wizard step/view to a Pages collection (each page has the same view model). The wizard view has the navigation buttons and one region for displaying the current wizard step. Within the view model I then add the current wizard step to the region. The Next, Previous, Finish and Cancel buttons are bound to commands within the wizard view model. I then need to know which buttons should be enabled based on the current wizard step, so Previous wouldn't be enabled on the first step, etc.
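For reference, a rough, framework-agnostic sketch of that wizard view model shape (the RelayCommand and property names here are illustrative, not Prism or nRoute types; INotifyPropertyChanged is omitted for brevity):

using System;
using System.Collections.ObjectModel;
using System.Windows;
using System.Windows.Input;

// Minimal ICommand implementation used only for this sketch.
public class RelayCommand : ICommand
{
    private readonly Action _execute;
    private readonly Func<bool> _canExecute;

    public RelayCommand(Action execute, Func<bool> canExecute)
    {
        _execute = execute;
        _canExecute = canExecute;
    }

    public event EventHandler CanExecuteChanged;

    public void RaiseCanExecuteChanged()
    {
        var handler = CanExecuteChanged;
        if (handler != null) handler(this, EventArgs.Empty);
    }

    public bool CanExecute(object parameter) { return _canExecute(); }
    public void Execute(object parameter) { _execute(); }
}

// Holds the wizard steps and the Next/Previous commands, enabling them per the current step.
public class WizardViewModel
{
    private int _currentIndex;

    public WizardViewModel()
    {
        Pages = new ObservableCollection<FrameworkElement>();
        NextCommand = new RelayCommand(() => MoveTo(_currentIndex + 1), () => _currentIndex < Pages.Count - 1);
        PreviousCommand = new RelayCommand(() => MoveTo(_currentIndex - 1), () => _currentIndex > 0);
    }

    public ObservableCollection<FrameworkElement> Pages { get; private set; }
    public FrameworkElement CurrentPage { get { return Pages.Count > 0 ? Pages[_currentIndex] : null; } }

    public RelayCommand NextCommand { get; private set; }
    public RelayCommand PreviousCommand { get; private set; }

    private void MoveTo(int index)
    {
        _currentIndex = index;
        NextCommand.RaiseCanExecuteChanged();
        PreviousCommand.RaiseCanExecuteChanged();
        // A property-changed notification for CurrentPage would be raised here in a real implementation.
    }
}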
Here is an example below of the same general wizard being used in two different instances. The idea is that the CustomerViewModel and customer views, and the WidgetConfigurationViewModel and widget configuration views, know nothing about the wizard.
Is it possible that I am trying to use nRoute in a way it has not been designed for?
//Customer wizard
WizardViewModel wizardViewModel = new WizardViewModel();
wizardViewModel.Pages.Add(NewCustomerStep1View);
wizardViewModel.Pages.Add(NewCustomerStep2View);
wizardViewModel.Pages.Add(NewCustomerStep3View);
//Widget configuration wizard
WizardViewModel wizardViewModel = new WizardViewModel();
wizardViewModel.Pages.Add(WidgetConfigurationStep1View);
wizardViewModel.Pages.Add(WidgetConfigurationStep2View);
wizardViewModel.Pages.Add(WidgetConfigurationStep3View);
All Classes are:
Let me do a sample for you, will post it after work.
Cheers,
Rishi
Sorry to pester you but is there any update on the sample?
Well, sorry, I got really caught up in work during the week - anyway, rather than one I've done two types of wizards: one with a single backing VM, and another with multiple VMs. Download the code from
Let me know if it works for ya.
Hi Rishi, thanks for the samples they have helped me port over the existing wizards.
I have implemented the wizard and it's working great. However, one final question on the subject: I would like to move the navigation buttons (Next, Previous, etc.) out of the general wizard view and into the application shell view (to save space). So my question is: can I associate two VMs with one view, so that the shell view would have both the Shell VM and the Wizard VM associated with it when the wizard is active, allowing me to bind the Wizard view model navigation ActionCommands to the buttons? I think the answer to this is no, but I would like to double check.
If this isn't possible, do you think an appropriate solution would be to use the event aggregator/pub-sub mechanism: when the wizard is active it publishes a message to say that it is active, and the shell view model subscribes to this and displays the navigation buttons. When a navigation button is then clicked, the Shell VM publishes a message to say that the "Next" button etc. has been clicked, and the Wizard VM subscribes to these messages and acts appropriately?
@Ultramods, did you have a look at the Widgets wizard (not the Customers one)? It used nRoute's event aggregator/pub-sub mechanism (called Channels) to publish the responses, and the next/back/forward buttons are separate from each step View. Also, you can't have two VMs for a View, at least not in the normal way we interpret VMs, and even if you did have two VMs you'd need to choose one to apply at runtime. However, you can reach into another VM via a relay or something.
Rishi
Ok thanks, I will use that approach.
|
http://nroute.codeplex.com/discussions/225598
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Sidnei da Silva wrote:
> On Thu, Jan 26, 2006 at 02:02:19PM +0000, Chris Withers wrote:
> | Sidnei da Silva wrote:
> | > My original intention was to put the config file location in the
> | > ZConfig schema, but that's *waaaay* too painful right now.
> |
> | What's the specific problem here? I find adding to ZConfig schemas
> | pretty easy...
>
> Yet you find ZCML declaring namespaces in ZCML files
> annoying *wink*. Sometimes I don't understand you :)
+1 to that. I think Chris doesn't really believe in the Second Law of Python (according to the prophet Peters).

> Tres' ZConfig products extension thing just enables you to use
> key-value pairs, without being able to specify what the datatypes for
> those values are, if I'm not mistaken.

That was the first thing Fred and I checked in. We then added (later that morning) the ability to declare new abstract section types:

> Also, AFAICT it's not present in Zope 3 yet.

The changes were to Zope2-specific schema stuff; I'm not even sure where they would fit into Zope3.

> What I think is that the Zope ZConfig schema should have something
> just like 'package-includes' is for ZCML. A place where you can drop a
> snippet of ZConfig schema and it will get picked up at boot time.

Tres.

--
===================================================================
Tres Seaver          +1 202-558-7113          [EMAIL PROTECTED]
Palladion Software   "Excellence by Design"

_______________________________________________
Zope3-dev mailing list
Zope3-dev@zope.org
Unsub:
|
https://www.mail-archive.com/zope3-dev@zope.org/msg03653.html
|
CC-MAIN-2017-17
|
en
|
refinedweb
|