text
stringlengths
20
1.01M
url
stringlengths
14
1.25k
dump
stringlengths
9
15
lang
stringclasses
4 values
source
stringclasses
4 values
OsiObject for complementarity constraints . More... #include <CouenneComplObject.hpp> OsiObject for complementarity constraints . Associated with two variables and , branches with either or Definition at line 22 of file CouenneComplObject.hpp. Constructor with lesser information, used for infeasibility only. Destructor. Definition at line 37 of file CouenneComplObject.hpp. Copy constructor. Cloning method. Reimplemented from Couenne::CouenneObject. Definition at line 43 of file CouenneComplObject.hpp. References CouenneComplObject(). compute infeasibility of this variable, |w - f(x)| (where w is the auxiliary variable defined as w = f(x) Reimplemented from Couenne::CouenneObject. compute infeasibility of this variable, |w - f(x)|, where w is the auxiliary variable defined as w = f(x) Reimplemented from Couenne::CouenneObject. create CouenneBranchingObject or CouenneThreeWayBranchObj based on this object Reimplemented from Couenne::CouenneObject. -1 if object is for xi * xj <= 0 +1 if object is for xi * xj <= 0 0 if object is for xi * xj = 0 (classical) Definition at line 64 of file CouenneComplObject.hpp.
http://www.coin-or.org/Doxygen/Couenne/class_couenne_1_1_couenne_compl_object.html
crawl-003
en
refinedweb
"Spatial" branching object for complementarity constraints. More... #include <CouenneComplBranchingObject.hpp> "Spatial" branching object for complementarity constraints. Branching on such an object x_1 x_2 = 0 is performed by setting either x_1=0 or x_2=0 Definition at line 24 of file CouenneComplBranchingObject.hpp. Copy constructor. Definition at line 43 of file CouenneComplBranchingObject.hpp. cloning method Reimplemented from Couenne::CouenneBranchingObject. Definition at line 49 of file CouenneComplBranchingObject.hpp. References CouenneComplBranchingObject(). Execute the actions required to branch, as specified by the current state of the branching object, and advance the object's state. Returns change in guessed objective on next branch Reimplemented from Couenne::CouenneBranchingObject. use CouenneBranchingObject::variable_ as the first variable to set to 0, and this one as the second Definition at line 63 of file CouenneComplBranchingObject.hpp. -1 if object is for xi * xj <= 0 +1 if object is for xi * xj <= 0 0 if object is for xi * xj = 0 (classical) Definition at line 68 of file CouenneComplBranchingObject.hpp.
http://www.coin-or.org/Doxygen/Couenne/class_couenne_1_1_couenne_compl_branching_object.html
crawl-003
en
refinedweb
Choose a variable for branching. More... #include <CouenneChooseVariable.hpp> Choose a variable for branching. Definition at line 27 of file CouenneChooseVariable.hpp. Constructor from solver (so we can set up arrays etc). Copy constructor. Destructor. Definition at line 48 of file CouenneChooseVariable.hpp. Assignment operator. Clone. Definition at line 44 of file CouenneChooseVariable.hpp. References CouenneChooseVariable(). Sets up strong list and clears all if initialize is true. Returns number of infeasibilities. If returns -1 then has worked out node is infeasible! Returns true if solution looks feasible against given objects. choose object to branch based on earlier setup Add list of options to be read from file Pointer to the associated MINLP problem. Definition at line 72 of file CouenneChooseVariable.hpp. journalist for detailed debug information Definition at line 75 of file CouenneChooseVariable.hpp.
http://www.coin-or.org/Doxygen/Couenne/class_couenne_1_1_couenne_choose_variable.html
crawl-003
en
refinedweb
hierarchy of mask of various shapes. More... #include <vcl_iosfwd.h> #include <vnl/vnl_vector.h> #include <vil/vil_image_view.h> #include <rgrl/rgrl_object.h> #include <rgrl/rgrl_macros.h> Go to the source code of this file. hierarchy of mask of various shapes. Disregarding the shape, each mask also provides a bounding box. Denoting the upper left corner as x0, and bottom right as x1 (in 2D case), the bounding box is defined by a tight interval [x0, x1] on all dimensions. Modifications Oct. 2006 Gehua Yang (RPI) - move rgrl_mask_3d_image into separate file Definition in file rgrl_mask.h. An output operator for displaying a mask_box. Definition at line 208 of file rgrl_mask.cxx. An output operator for displaying a mask_box. Definition at line 216 of file rgrl_mask.cxx. Intersection Box A with Box B (make it within the range of B). Definition at line 236 of file rgrl_mask.cxx.
http://public.kitware.com/vxl/doc/release/contrib/rpl/rgrl/html/rgrl__mask_8h.html
crawl-003
en
refinedweb
This is the first installment of a numerous part article on doing N-Tier development with Microsoft Visual Studio .Net. In retrospect DNA will be examined and basically expanded as a starting point. In the world of .Net comparisons will be made with code examples contained in the projects in the TVanover.zip file that demonstrate the proof of concept of this series of articles. The purpose of this article is to examine a proof of concept on an architecture that follows the DNA pattern on concept only. It will cover each of the layers, some more in depth than others to expand possibilities to .Net developers.DNA ArchitectureIf an alternative definition of DNA was brought to layman terms it would be scalability. Microsoft invented this concept where developing applications in function specific layers or tiers that an application could be scaled endlessly and maintained with low total cost of ownership. These tiers include Presentation, Business, Data Access, and Data Storage. The splitting of tier creates boundaries and allows encapsulation of specific functionality from tiers that should not have knowledge of other tiers. For example the UI in the Presentation tier should never have direct database access or access to the Data Access layer, but should only have access to the Business Logic.Scaling UpThis defines an application in its relationship to where it resides on hardware. An application that has logically separated tiers can have multiple tiers on the same server. This type of application design is limited to the amount of memory and processors that can be applied to the hardware. This is scaling up. Figure 1 shows all tiers located on a single application server and is an example of logically separated N-Tier development.Scaling OutSplitting up the layers to different physically separated tiers allows scaling out. In this paradigm the tiers reside on different servers and allow more servers to be added so that load balancing and clustering can handle a larger capacity of simultaneous users for the application. Figure 2 shows a bit more complex example of this concept. In this paradigm more web servers and application servers can be added to facilitate more simultaneous users.Figure 3 shows a Visual Studio .NET solution for physically separated middle tier components. Notice the difference in that there are no proxies brought to the web server from the business tier as in the case of the DNA COM architecture shown in Figure 1. Instead a contract is brought through the web application through Simple Object Access Protocol (SOAP) that basically takes the place of DCOM and proxies generated by middle tier COM components. A Proxy class can be generated that will allow asynchronous access to the web service by using the Wsdl.exe.This article will focus on physical separation of tiers that will enable the scalability that is required in enterprise web based applications. This installment will focus on the Data Access Layer and the following articles will continue walking up the tiers until the entire proof of concept application is constructed.COM+ Data ComponentsMTS and COM+ allowed for the creation of rich middle tier components that abstracted the data and business logic from the user interface and centralized the maintenance of that encapsulated logic. Developers had several options in connecting to data through the command objects created in the data tier to execute stored procedures on SQL Server databases. One of the drawbacks in creating data access components has been an issue of maintainability of the components as stored procedures change. When the parameters change in a stored procedure the data access components must change and be recompiled to accommodate the new parameter, the removed parameter, the data type change, or the order of new parameters. Thus there exists an implied dependency between data access components and the actual stored procedures that reside in the DBMS. Such practices in object design and creation do not support object reuse and subsequent recoding basic logic frequently occurs. The focus will on these issues and a more flexible architecture and design utilizing the features that are offered in the Microsoft .NET framework will be illustrated.Hard Coded ParametersIn general hard coding the parameters is most efficient in terms of performance in creating the command object parameters collection used in the old world of ADODB and COM. This was also a maintenance issue throughout the development cycle as fields and parameters changed in the database to accommodate the ongoing project development cycle. Large applications that had several data access libraries containing multiple classes defining different entity based methods could result in thousands of lines of code dedicated to creating the parameters and setting up their proper types, sizes, direction, and precision. Each time parameters were added, removed or changed, compatibility was broken between the business components and the data access components as well as the data access components and the stored procedures. In some situations, you may add a parameter or change the order or a parameter which could then result in a logic error, thus the compilation was successful but at runtime you would get data corruption, type mismatches, overflows, any number of unexpected errors, or difficult to find bugs. Listing 1 shows typical code in Visual Basic 6.0 using hard coded parameters of the ADODB Command object to execute communication with the database. Listing 1 Hard coded parameters With cmProduct.ActiveConnection = cnProduct.CommandText = "usp_ProductUpd".CommandType = adCmdStoredProc'@ProductID parameterprmProduct = .CreateParameter("@ProductID", _adInteger, adParamInput, , ProductID).Parameters.Append(prmProduct)'@ShelfID parameterprmProduct = .CreateParameter("@ShelfID", _adInteger, adParamInput, , ShelfID).Parameters.Append(prmProduct)'@BinID parameterprmProduct = .CreateParameter("@BinID", _adInteger, adParamInput, , BinID).Parameters.Append(prmProduct).Execute()End WithParameters Refresh The ADODB Command objects parameter collection contained a refresh method that allows developers to dynamically create parameters. This helped overcome the maintainability issues involved with hard coding parameters, but at a measurable performance cost due to multiple calls on the database. When the parameters were refreshed a connection was made to the database to query those parameters, and then when the command was executed a second trip to the database was made.In the case a physical separation of tiers, this created a measurable performance hit on the application and therefore created a tradeoff of development time versus application performance. Compatibility issues were reduced when adding or removing parameters as the method parameters were passed inside a variant array. There would only be a need to add the new fields or parameters to the UI, business object, and the stored procedure and insure that the correct ordinal position of the value passed in all layers corresponded to the intended parameter in the stored procedure. Listing 2 illustrates the use of code for the refresh method.Listing 2 Parameters Refresh With oCmd.ActiveConnection = oCnn.CommandText = "usp_ProductUpd".CommandType = adCmdStoredProc.Parameters.Refresh()'zeroth parameter is @RETURN_VALUEFor intValues = 1 To .Parameters.Count - 1.Parameters(intValues) = varValues(intValues - 1)Next intValuesEnd WithXML Template AdapterThere was little stir in 2000 when several faster alternatives evolved. One of which was encapsulating the parameter definitions in XML based schema files and populating the parameters collection of the command object based on the attributes or the schema of the file XML file. Performance was gained over the refresh method and there was less maintenance associated with changes to the data model. XML templates can automatically be generated to the appropriate read directory required for the data component.A caveat is that some file IO is required on each command object being created unless the XML files were cached in IIS. Someone also had to write the new template file when changes occurred to a stored procedure. Even so this eased maintenance somewhat and the performance hit was a fair trade off for reduced project development time and easier maintenance.When parameters changed in a stored procedure there was not an issue with compatibility as the parameters were passed to the method encapsulated inside a variant array. The business and UI tiers were the only other places where changes would have to be propagated in order to have proper communication on all tiers.Listing 3 XML Template Adapter'load the attribute data from the xml file back to the parameters objectFor Each objElement In objNodeListIf lngElement = -1 Then 'write @RETURN_VALUE parameterprmUpdate = .CreateParameter(, _CLng(objElement.getAttribute(ATTRIBUTE_DATA_TYPE)),_ CLng(objElement.getAttribute(ATTRIBUTE_DIRECTION)), _CLng(objElement.getAttribute(ATTRIBUTE_SIZE)), Null)'use to keep the parameter in line with the variant arraylngElement = 0Else 'write rest of parametersprmUpdate = .CreateParameter(, _CLng(objElement.getAttribute(ATTRIBUTE_DATA_TYPE)), _CLng(objElement.getAttribute(ATTRIBUTE_DIRECTION)), _CLng(objElement.getAttribute(ATTRIBUTE_SIZE)), _Parameters(lngElement))'use to keep the parameter in line with the variant arraylngElement = lngElement + 1End If'assign precision to parameter in case of a decimalprmUpdate.Precision =CLng(objElement.getAttribute(ATTRIBUTE_PRECISION)).Parameters.Append(prmUpdate)Next 'objElement TVanover.DDL Figure 4 illustrates the architecture associated with the rest of the article installments. The top center rectangle is the TVanover.DDL or Dynamic Data Layer. This is a class library that contains both transactional and non-transactional methods that are commonly used in an application. In the Human Resource Web Service examine the SearchEmployees method shown in Listing 4. Listing 4[WebMethod(TransactionOption=TransactionOption.NotSupported)]public DataSet SearchEmployees(string firstName, string lastName){ExecDataSet oExec = new ExecDataSet();DataSet dsReturn = new DataSet("Employees"); object[] oParameters = new Object[] {firstName, lastName};dsReturn = oExec.ReturnDataSet("usp_EmployeeSearch", oParameters); return dsReturn;} This method has an attribute set that takes no transactions as selecting records should not be contained in a transaction where data is not altered. There are two parameters passed to the method for the first name and last name of the individual being sought after. An instance of the ExecDataSet class is instantiated, the two parameters are wrapped inside an object array, the stored procedure and object array are passed to the ReturnDataSet method, and it is executed to return a Dataset. Listing 5 goes inside the ExecDataSet.ReturnDataSet method. Listing 5public DataSet ReturnDataSet(string storedProc, params object[] oParameters){DataSet dsReturn = new DataSet();SqlDataAdapter oAdapter = DataAdapter.GetSelectParameters(storedProc, oParameters);try{oAdapter.Fill(dsReturn);}catch(Exception oException){if (!EventLog.SourceExists(APPLICATION_NAME)){EventLog.CreateEventSource(APPLICATION_NAME, "Application");}EventLog.WriteEntry(oException.Source, oException.Message, EventLogEntryType.Error );}finally{oAdapter.SelectCommand.Connection.Close();}return dsReturn;} The params keyword in the ReturnDataSet method allows various numbers of parameters to be sent to the method and is dynamic in nature. This must always be the last in the method signature and there can only be one reference to it inside the signature. A DataSet is created, a SqlDataAdapter is created from the return of the static class method DataAdapter.GetSelectParameters, and the newly created DataSet is filled from that SqlDataAdapter. If an error occurs the application log is used to store the error. The finally code block closes the connection regardless of error or not, and the DataSet is returned to the calling method.In examining the DataAdapter.GetSelectParameters method take a closer look at Listing 6. Listing 6public static SqlDataAdapter GetSelectParameters(string storedProc,params object[] oParameters){ SqlDataAdapter adapter = new SqlDataAdapter(); try{//this method requires parameters and will //throw an exception if called without themif(oParameters == null){throw new ArgumentNullException(NULL_PARAMETER_ERROR);}else{DataTable oTable = ParameterCache.dsParameters.Tables[storedProc]; int iParameters = 0;SqlCommand oCmd = new SqlCommand(storedProc);oCmd.CommandType = CommandType.StoredProcedure; //write the parameters collection //based on the cache and values sent inforeach(DataRow oDr in oTable.Rows){ oCmd.Parameters.Add(CreateParameter(Convert.ToString(oDr[PARAMETER_NAME]),Convert.ToInt32(oDr[PARAMETER_DIRECTION]),oParameters[iParameters]));iParameters++;}//Add a return parameteroCmd.Parameters.Add(CreateParameter());oCmd.Connection = Connections.Connection(false);adapter.SelectCommand = oCmd;}}catch(Exception oException){if (!EventLog.SourceExists(APPLICATION_NAME)){EventLog.CreateEventSource(APPLICATION_NAME, "Application");}EventLog.WriteEntry(oException.Source, oException.Message, EventLogEntryType.Error );}return adapter;}If no parameters are passed to this method an exception is thrown back to the caller, otherwise a new DataTable is created and is assigned the contents of a static member ParameterCache.dsParameters. This is a public static multiple tabled DataSet that is populated on the first access and remains in memory while the web application is running. The table being returned is named after the stored procedure that was sent to this method. The foreach iteration builds the command object and populates each parameter from the data in the oTable object and the oParameters that have been passed to this method. A connection is returned through another static member call that maintains a connection pool and determined whether the connection is enlisted in the root transaction or not. The SqlDataAdapter is then returned to the calling method to be executed. In looking at the ParameterCache.dsParameters shown in Listing 7 notice that the class has a static constructor. Listing 7public class ParameterCache{public static DataSet dsParameters;/// <summary>/// Uses a static constructor so that instantiation of/// this class is not needed/// </summary>static ParameterCache() {//instantiate and fill the datasetParametersLookup oParameters = new ParametersLookup();dsParameters = oParameters.FillCache();}This access modifier will cause the constructor to run the first time the object is used before any thing else, even when the object is not instantiated with the new keyword. Inside the constructor an instance of the ParametersLookup class is instantiated and the FillCache method is called to fill the public static dsParameters DataSet. In Listing 8 a new DataSet is created along with an SqlCommand object, and an SqlDataAdapter. A connection is returned from the Connections class that will be explained following this section. The usp_ParameterCache stored procedure is called to populate the new dataset. The new local DataSet now contains all a table for each user defined stored procedure in the associated database specified in the connection. The zeroth table contains the names of the stored procedures and each table in the new DataSet is named after its relevant position in the tables collection of the DataSet. The connection is closed in the finally block and the parameters DataSet is returned to the caller. Listing 8public DataSet FillCache() {DataSet dsParameters = new DataSet();int iTables = 1;SqlCommand oCmd = new SqlCommand();SqlDataAdapter oDa = new SqlDataAdapter(); try { oCmd.CommandText = "usp_ParameterCache";oCmd.CommandType = CommandType.StoredProcedure;oDa.SelectCommand = oCmd;oCmd.Connection = Connections.Connection(false);//grab the tablesoDa.Fill(dsParameters); //name the tables based on the names from the first tableforeach(DataRow oDr in dsParameters.Tables[0].Rows){dsParameters.Tables[iTables].TableName = Convert.ToString(oDr["ProcName"]);iTables++;}}catch(Exception oException) {if (!EventLog.SourceExists(APPLICATION_NAME)){EventLog.CreateEventSource(APPLICATION_NAME, "Application");}EventLog.WriteEntry(oException.Source, oException.Message, EventLogEntryType.Error );throw new Exception(oException.Message,oException);}finally{oCmd.Connection.Close();}return dsParameters;}In examining the Connections class notice Listing 9. Listing 9public static SqlConnection Connection(bool enlist){string Connection = string.Format("User ID=DataLayer;Password=amiinyet;" +"Persist Security Info=False;" + "Initial Catalog=Datalayer;" + "Data Source=coder;Max Pool Size=15;" +"Enlist={0}; Connection Reset=true;" + "Application Name=DataAccess;" + "Pooling=true;Connection Lifetime=0;", enlist); SqlConnection oConn = new SqlConnection(Connection); try{oConn.Open();}catch(Exception oException){if (!EventLog.SourceExists(APPLICATION_NAME))EventLog.CreateEventSource(APPLICATION_NAME,EVENT_LOG);}EventLog.WriteEntry(oException.Source, oException.Message, EventLogEntryType.Error );}return oConn;} The static method Connection takes a single parameter and returns an open SqlConnection that is ether enlisted in a transaction or not depending on the value of the parameter sent in. The new connection is part of a pool. In this instance the pool has a maximum size of 15 connections. The table in Listing 10 has more information on the connection string properties available to the connection object.Listing 10 User ID Logon setup in SQL Server Password Logon password Persist Security Info If a connection has been opened the password is not sent when it is re-used from the pool. Initial Catalog Database Data Source Database Server Max Pool Size Number of connections to allow in the pool for reuse. Care must be taken on the size of the Max Pool Size as this will allow as many connections to the database as listed under loaded conditions. Enlist Add the connection to the current threads transaction context. Connection Reset Determines whether the database connection is reset when being removed from the pool Application Name Identity of the connection visible in perfmon Pooling Draw from the pool if there is a connection available or create a new one and add it to the pool Connection Lifetime If there is no longer immediate need remove the connection. A comparison is done in seconds to determine the immediate need.In conclusion it should be noted that the ParameterCache.dsParameters DataSet will only be populated one time, and that accessing the parameters is instantaneous on subsequent calls. Only the first call is expensive. Also some experimenting should be done with the DDL to determine if it is more suited to run in the context of a library or server as defined in the AssemblyInfo.cs of the project. This library will also register itself in COM Services the first time it is run as defined by the ApplicationName attribute defined there as well. The next piece of the article will be geared toward Web Services and controlling transactions contained in the business Faade layer the other tier are enlisted int. N-Tier Development with Microsoft .NET : Part I Aspect Oriented Programming Pretty cool, makes sense...Im a noob trying to figure out a good architecture...makes the most sense out of about 10 articles ive read...Thank you...
http://www.c-sharpcorner.com/uploadfile/tavanover/ntierdevelopmentwithms111302005054224am/ntierdevelopmentwithms1.aspx
crawl-003
en
refinedweb
Aastra 55I Ip Phone New Page 1 Aastra 55i IP Telephone (A1755-0131-10-05) - Exceptional Features and Value in an Advanced Featured, Expandable IP Telephone The 55i from Aastra offers powerful features and flexibility in a standards based, carrier-grade advanced level expandable IP telephone. Part of the Aastra family of IP telephones, the 55i is ideally suited for moderate to heavy telephone users who require more one touch feature keys and XML based programs. Details Brand: Aastra Part Number: A1755-0131-10-05 UPC: A1755-0131-10-05 [ Report abuse or wrong photo | Share your Aastra 55I Ip Phone photo ] Manual Preview of first few manual pages (at low quality). Check before download. Click to enlarge. Aastra 55I Ip Phone User reviews and opinions Comments posted on are solely the views and opinions of the people posting them and do not necessarily reflect the views or opinions of us. Documents 55 Adjusting the Volume... 22 Status Lights (LEDs).. 22 Call Timer... 23 Softkeys... 23 Programmable Keys.. 23 Line/Call Appearance Keys.. 24 Using a Headset with your Telephone.. 24 Model 536 Expansion Modules (536EM)..25 Installing the 536EM... 26 Troubleshooting Solutions...28 Limited Warranty....31 Introduction Congratulations on your purchase of the Model 55i IP Phone! The 55i communicates over an IP Network, allowing you to place and receive calls in the same manner as a regular business telephone. The 55i is capable of supporting the SIP IP protocol. Phone Features 8 line graphical LCD screen (144 x 75 pixels) with white backlight 12 programmable keys 6 Top keys: Programmable hard keys (up to 6 programmable functions) 6 Bottom keys: Programmable state-based softkeys (up to 20 programmable functions) 4 call appearance lines with LEDs Supports up to 9 call lines Full-duplex speakerphone for handsfree calls Headset support (modular connector) Built-in-two-port, 10/100 Ethernet switch - lets you share a connection with your computer. Inline power support (based on 802.3af standard) which eliminates power adapters. AC power adapter (included) Enhanced busy lamp fields* Set paging* * Availability of feature dependant on your phone system or service provider Requirements The 55i IP Phone requires the following environment: SIP-based IP PBX system or network installed and running with a SIP account created for the 55i phone. Access to a Trivial File Transfer Protocol (TFTP), File Transfer Protocol (FTP), Hypertext Transfer Protocol (HTTP) server, or Hyper Text Transfer Protocol over Secure Sockets Layer (SSL) (HTTPS). 802.3af Ethernet/Fast Ethernet LAN Category 5/5e straight through cabling Power over Ethernet (PoE) inline power injector (optional accessory necessary only if your network provides no inline power and if you do not use the IP Phones power adapter). Handset Handset Cord Programmable Wall Mount Drilling Template Key Card Telephone Base Desk Legs Power Adapter Ethernet Cable 55i Installation Guide Screws and Anchors for Wall Mounting Optional Accessories (Not Included) PoE (Power over Ethernet) Additional Ethernet Cable (category 5/5e straight Inline Power Injector through cable) Model 536EM Expansion Module Model 560EM Expansion Module. Key Panel 6 programmable keys with LEDs High quality speakerphone HAC handset Message waiting lamp 6 dynamic contextsensitive softkeys Goodbye key Options key Hold key Redial Key Volume control Navigational keys 8-line LCD screen Keypad Speakerphone/headset toggle key Mute key 4 call appearance lines 2 - DIRECTORY 3 - CALLERS LIST) Separate Network Jack To Network Other Network Devices. To Other Network Device Network Jack (if Inline power provided, do not install the power adapter) Ethernet Cables Network Jack Power Outlet PoE Power injector (if Inline power is not provided) To PoE Connecting a Handset or Headset Turn the phone over and locate the headset jack marked f. Insert the headset cord into the jack until it clicks into place. Then route the headset cord through the groove as shown in the above illustration. Headset (Optional). Three stand slot locations for customizing the height of the desk phone. 20.7 deg. Incline Angle 23.3 deg. Incline Angle 26.6 deg. Incline Angle 30.9 deg. Incline Angle Total 4 Viewing Angles Model 55i IP Phone Installation Guide 13... Contrast Level Use the Change softkey to cycle through eight contrast settings, which brighten or darken the display. Model 55i IP Phone Installation Guide 19. Choose this setting if you want to make or receive all calls using a handset or headset. Calls can be switched from the handset to headset by pressing the button on the phone. To switch from the headset to the handset, lift the handset.. Headset Speaker/Headset Headset/Speaker Phone Status This option allows you to: View your network status including your phones IP and MAC address View your firmware version Restart your phone There is also a system administrator level-only option to reset the phone to factory default settings. See your system administrator for details. key. button followed by the Model 55i IP Phone Installation Guide 21. 22 Model 55i IP Phone Installation Guide Call Timer. These keys can also be set up to quickly access features such as Call return (*69) or Voicemail. OFF Rapid Flash Slow Flash Description Indicates idle line or no call activity Indicates ringing on the line.. The 536EM provides 36 additional softkeys on a 55i IP Phone. The softkeys support the following features: BLF Speedial. Handset 12 headset mode 20 headset, making and receiving calls with 24 headset/speaker mode 20 Requirements 1 ring tone, setting 19 Services key 7 Shared Network Connection 9 SIP settings 21 Softkeys 23 speaker mode 20 speaker/headset mode 20 indicator light 22 Inline Power Not Provided 11 Inline Power Provided 10 Install on the Desk 13 Install on the Wall 14 Model 55i IP Phone Installation GuideIndex-1 time setting format 19 setting time 19 setting time server 18 setting time zone 19 tone set, setting 19 troubleshooting solutions 28 user password 21 volume, adjusting 22 Warranty 31 Web UI, using 17 Index-2 If youve read this owners manual and consulted the Troubleshooting section and still have problems, please visit our Web site at, or call 1-800-574-1611 for technical assistance. Aastra Telecom Inc. 2007 41-001158-00 Rev 01 RM6501 CFD-G700CP Review Bamboo DIR-625 Sport Plus Prism DCR-SR62E E 950 Yamaha MR10 BAR638HGA IC-U82 JBL LC1 DMC-FS42 SGH-T819 ASF454 SMS60M02EU T913V TV Link Lvw-5005 CDX-4000RV KF750 S1070 Pink TDM-MP 10 JBL Spot NP-N140-ka01PL Warrior HDR-FX1 Anneaux FAX-phone B95 KDC-M4524G TX-37LZ8F Airjack FB 630P B7330 PRO M40X DMC-FX07 DV7811N 8847MP4 4 0 SPP-A700 Valvefx SGS55E98EU MDE3706AGW PMC-37PRO Dr-mx1 GT-C6625 MDR-DS5100 KS-LH6R Cube-100 Bass Tangent UNO Color 640 X3 2004 CN 4056 WF220ANW PC-150 TXP42G15E SR-S20NTD VL410 ROC1404 5-DRS CE110 SBH170 Travelmate 4650 WV-CU350 F3000 Zire 72 Inspiron 1526 KX-330 ENB3450 WGR101 XL2370-1 NS-45 System GPS 10X L1732S DVP3266K 51 Latigo DEH-P700BT 21PT2443-94R DRW-1000 Lrbc22544ST L194WT-BF CDA-9855R SAT 2 P2470LHD DEH-2800MPB Dimage X GA-MA69gm-s2H CC-500L Lavamat 570 ZD22 6R AQV24FAN SV2042H 1989-1994 MHC-EX66 EOS 1N GSX-1100F Env06 TPM Yamaha MFC
http://www.ps2netdrivers.net/manual/aastra.55i.ip.phone/
crawl-003
en
refinedweb
Kernel Density objective function. More... #include <rrel_kernel_density_obj.h> Kernel Density objective function. Implements the Kernel Density Estimation as presented in the paper "Robust Computer Vision through Kernel Density" by Chen and Meer, 2002. Given residuals ri, i = 1,...,n, the cost function is the estimated density f(x) based on a kernel function K(u) and a bandwidth h as f(x) = -1 / (nh) * sum( K(u) ) where u = (ri-x)/h K(u) = 1.09375 * (1 - u^2)^3 h = [243 * R(K) / 35 / Mu(K)^2 / n]^0.2 * scale The scale can be provided as a prior scale, or computed by MAD or MUSE. Definition at line 27 of file rrel_kernel_density. Definition at line 25 of file rrel_kernel_density_obj.cxx. Destructor. Definition at line 34 of file rrel_kernel_density_obj.h. Calculate the bandwidth. Definition at line 117 of file rrel_kernel_density_obj.cxx. The mode of the density estimate which maximizes the estimated kernel density. The value can be used to shift the estimated parameters. Definition at line 54 of file rrel_kernel_density. Not implemented. Implements rrel_objective. Definition at line 32 of file rrel_kernel_density_obj.cxx. Evaluate the objective function on homoscedastic residuals. prior_scale is needed if the type RREL_KERNEL_PRIOR is used. Implements rrel_objective. Definition at line 41 of file rrel_kernel_density_obj.cxx. x is set to 0;. Definition at line 63 of file rrel_kernel_density_obj.h. Given a kernel and the bandwidth, the estimated density of residuals. Definition at line 169 of file rrel_kernel_density_obj.cxx. Kernel function K(u). Definition at line 184 of file rrel_kernel_density_obj.cxx. Depends on the scale type used. Implements rrel_objective. Definition at line 59 of file rrel_kernel_density_obj.h. Scale estimate. The result is undefined if can_estimate_scale() is false. Reimplemented in rrel_muset_obj, and rrel_lms_obj. Definition at line 76 of file rrel_objective.h. Set the type of the scale. RREL_KERNEL_MAD uses median absolute deviations to estimate the scale. RREL_KERNEL_PRIOR uses the prior scale provided. RREL_KERNEL_MUSE uses MUSE to estimate the scale. Definition at line 54 of file rrel_kernel_density_obj.h. Definition at line 84 of file rrel_kernel_density_obj.h. Definition at line 82 of file rrel_kernel_density_obj.h.
http://public.kitware.com/vxl/doc/release/contrib/rpl/rrel/html/classrrel__kernel__density__obj.html
crawl-003
en
refinedweb
Base class for (robust) estimation problems. More... #include <rrel_estimation_problem.h> Base class for (robust) estimation problems. A fundamental design decision is that the objective function to be minimized is not tied to the class. This allows different robust objective functions to be used with the same data and function model. A disadvantage is that computing the derivative of the objective with respect to the data or the model parameters is not straightforward. So far this hasn't proven to be a problem. A second design decision is that functions are provided for indirect access to the data for residual and weight calculation and to support random sampling operations. Only a few key functions are pure virtual; others will cause an abort (with an error message) unless overridden. This allows errors to be detected (albeit at run-time) if they are used improperly without requiring subclasses to redefine unneeded functions. For example, if a derived class does not implement compute_weights(), then attempting to solve that problem with IRLS will cause an abort at runtime. Definition at line 32 of file rrel_estimation_problem. See the comments for param_dof() and num_samples_to_instantiate() for the meaning of these two parameters. Definition at line 9 of file rrel_estimation_problem.cxx. Constructor. Derived classes using this _must_ call set_dof() and set_num_samples_for_fit(). Definition at line 19 of file rrel_estimation_problem.cxx. Destructor. Definition at line 26 of file rrel_estimation_problem.cxx. Compute the residuals relative to the given parameter vector. The number of residuals must be equal to the value returned by num_samples(). This is a deterministic procedure, in that multiple calls with a given parameter vector must return the same residuals (in the same order). Implemented in rrel_linear_regression, rrel_homography2d_est, rrel_affine_est, rrel_quad_est, rrel_shift2d_est, and rrel_orthogonal_regression. vector from a minimal sample set. The point_indices vector are indices into the data set, and must be filled in with num_samples_to_instantiate() indices. Returns true if and only if the points resulted in a unique parameter vector. Implemented in rrel_linear_regression, rrel_homography2d_est, rrel_affine_est, rrel_quad_est, rrel_shift2d_est, and rrel_orthogonal_regression. The number of samples. Implemented in rrel_linear_regression, rrel_homography2d_est, rrel_orthogonal_regression, rrel_shift2d_est, rrel_affine_est, and rrel_quad_est.. Compute the parameter vector and the normalised covariance matrix. (Multiplying this matrix by the variance in the measurements gives the covariance matrix.) If the weights are provided they are used in the process. Note that if the weights are in fact given, the number of weights (num_wgts) MUST be equal to "num_residuals" returned by the compute_residuals function. If some of the residuals should be ignored as outliers (e.g. as explicitly identified as such by Least Median of Squares), then their weights should just be set to 0. Implemented in rrel_linear_regression, rrel_homography2d_est, rrel_affine_est, rrel_quad_est, rrel_shift2d_est, and rrel_orthogonal_regression. Definition at line 173 of file rrel_estimation_problem.h. Definition at line 177 of file rrel_estimation_problem.h. Definition at line 174 of file rrel_estimation_problem.h. Definition at line 175 of file rrel_estimation_problem.h. Definition at line 178 of file rrel_estimation_problem.h. Definition at line 176 of file rrel_estimation_problem.h.
http://public.kitware.com/vxl/doc/release/contrib/rpl/rrel/html/classrrel__estimation__problem.html
crawl-003
en
refinedweb
Class to maintain data and optimization model for 2d homography estimation. More... #include <rrel_homography2d_est.h> Class to maintain data and optimization model for 2d homography estimation. This class assumes each point has a unique correspondence, even though it may be incorrect. This is the usual assumption used in 2d homography estimation. It probably isn't the best thing to do in practice, though, because correspondences are hard to find without knowing the transformation and robust estimation can pick out the correct correspondences even when they aren't unique. The corresponding data points are provided as a vectors of vgl_homg_point_2d. Corresponding points are assumed to share the same index in the two vectors. Several aspects of this class aren't quite up with the "best" techniques in the literature, although the practical significance of this is known to be quite limited. First, the symmetric transfer error is used in computing residuals. Second, the weighted least-squares fit is just a robust version of Hartley's normalized 8-point algorithm. More sophisticated versions could be developed, but this class was written mostly for demonstration purposes. Definition at line 38 of file rrel_homography. By default, we want a full 8-DOF homography Definition at line 13 of file rrel_homography2d_est.cxx. Constructor from vnl_vectors. By default, we want a full 8-DOF homography Definition at line 40 of file rrel_homography2d_est.cxx. Destructor. Definition at line 57 of file rrel_homography2d_est.cxx. Compute unsigned fit residuals relative to the parameter estimate. Implements rrel_estimation_problem. Definition at line 102 of file rrel_homography 70 of file rrel_homography2d_est.cxx. Convert a homography to a linear parameter list (for estimation). Overloaded for specialized reduced-DOF homographies (i.e. affine) Definition at line 203 of file rrel_homography2d_est.cxx. Definition at line 221 of file rrel_homography2d_est.cxx. Total number of correspondences. Implements rrel_estimation_problem. Definition at line 63 of file rrel_homography. Convert a linear parameter list (from estimation) to a homography. Overloaded for specialized reduced-DOF homographies (i.e. affine) Definition at line 212 of file rrel_homography2d_est.cxx. Print information as a test utility. Definition at line 260 of file rrel_homography2d_est of the correspondence pair has Gaussian error, so the Euclidean distance residual has 4 degrees of freedom. Reimplemented from rrel_estimation_problem. Definition at line 63 of file rrel_homography 144 of file rrel_homography2d_est.cxx. Definition at line 99 of file rrel_homography2d_est.h. Definition at line 101 of file rrel_homography2d_est.h. Definition at line 102 of file rrel_homography2d_est.h. Definition at line 100 of file rrel_homography2d_est.h.
http://public.kitware.com/vxl/doc/release/contrib/rpl/rrel/html/classrrel__homography2d__est.html
crawl-003
en
refinedweb
CONF_MODULES_FREE(3) OpenSSL CONF_MODULES_FREE(3) CONF_modules_free, CONF_modules_load, CONF_modules_unload - OpenSSL configuration cleanup functions #include <openssl/conf.h> void CONF_modules_free(void); void CONF_modules_unload(int all); void CONF_modules_finish(void); CONF_modules_free() closes down and frees up all memory allocated by all configuration modules. CONF_modules_finish() calls each configuration modules fin- ish. Normally applications will only call CONF_modules_free() at application to tidy up any configuration performed. None of the functions return a value. conf(5), OPENSSL_config(3), "CONF_modules_load_file(3), CONF_modules_load_file(3)" CONF_modules_free(), CONF_modules_unload(), and CONF_modules_finish() first appeared.
http://mirbsd.mirsolutions.de/htman/sparc/man3/CONF_modules_unload.htm
crawl-003
en
refinedweb
Algorithm::Pair::Best - deprecated - use Algorithm::Pair::Best2 version 1.036 use Algorithm::Pair::Best; my $pair = Algorithm::Pair::Best->new( ? options ? ); $pair->add( item, ? item, ... ? ); @pairList = $pair->pick( ? $window ? ); Given a set of user-supplied scoring functions that compare all possible pairs of items, Algorithm::Pair::Best attempts to find the 'best' collective pairings of the entire group of items. After creating an Algorithm::Pair::Best->new object, add a list of items (players) to be paired. add connects the new items into a linked list. The total number of items added to the linked list must consist of an even number of items or you'll get an error when you try to pick the pairs. Pairings are determined partially by the original order items were added, but more importantly, items are paired based on scores which are determined by user supplied functions that provide a score for each item in relation to other items (see scoreSubs below). An info hash is attached to each itme to assist the scoring functions. It may be convenient to add access methods to the Algorithm::Pair::Best package from the main namespace (see the scoreSubs option to new below for an example). Algorithm::Pair::Best->pick explores all combinations of items and returns the pairing with the best (highest). On my system it takes about 2 seconds to pair 12 items (6 pairs), and 20 seconds to pair 14 items (with no 'negative scores only' optimization). Trying to completely pair even 30 items would take too long. Fortunately, there is a way to get pretty good results for large numbers, even if they're not perfect. Instead of trying to pair the whole list at once, Algorithm::Pair::Best->pick pairs a series of smaller groups within a 'window' to get good 'local' results. The list created by add should be moderately sorted so that most reasonable candidates will be within window range of each other. The new method accepts a window option to limit the number of pairs in each window. The window option can also be overridden by calling pick with an explicit window argument: $pair->pick($window); See the description of the window and pick below. Algorithm::Pair::Best is deprecated - use Algorithm::Pair::Best2 Algorithm::Pair::Best - Perl module to select pairings (designed for Go tournaments, but can be used for anything, really). A new Algorithm::Pair::Best object becomes the root of a linked list of Algorithm::Pair::Best objects. This root does not represent an item to be paired. It's just a control point for the collection of items to be paired. Items are added to the Algorithm::Pair::Best list with the <add> method (see below). Sets the default number of pairs in the sliding pairing window during a pick. Can also be set by passing a window argument to pick. Here's how a window value of 5 (pairs) works: first pair items 1 through 10. Keep the pairing for the top two items and then pair items 2 through 12. Keep the top pairing and move down to items 4 through 14. Keep sliding the window down until we reach the last 10 items (which are completed in one iteration). In this way, a tournament with 60 players takes less than 1/4 a minute (again, on my system) to pair with very good results. See the gopair script in Games::Go::AGA for a working example. Default: 5 Enable/disable the 'negative scores only" optimization. If any score greater than 0 is found during sortCandidates, Algorithm::Pair::Best turns this flag off. IMPORTANT: If this flag is turned on and a scoreSub can return a number greater than 0, the resultant pairing may not be optimal, even locally. Default: 1 (enabled) Scoring subroutines are called in array order as: foreach my $s (@{$my->scoreSubs}) { $score += $my->$s($candidate); } Scores are accumulated and pairings are attempted. The pairing with the highest cumulative score is kept as the 'best'. Note: Algorithm::Pair::Best works best with scoring subroutines that return only scores less than or equal to 0 - see the sortCandidates method for more details. The scoring subroutines should be symmetric so that: $a->$scoreSub($b) == $b->$scoreSub($a) Example: Note that the scores below are negative (Algorithm::Pair::Best searches for the highest combined score). 'Negative scores only' allows an optimization that is probably worth keeping in mind - it can reduce pairing time by several orders of magnitude (or allow a larger window). See the sortCandidates method for more information. . . . # create an array of scoring subroutines: our @scoreSubs = ( sub { # difference in rating. my ($my, $candidate, $explain) = @_; # the multiplier here is 1, so that makes this the 'normal' factor my $score = -(abs($my->rating - $candidate->rating)); return sprintf "rating:%5.1f", $score if ($explain); return $score; }, sub { # already played? my ($my, $candidate, $explain) = @_; my $already = 0; foreach (@{$my->{info}{played}}) { $already++ if ($_ == $candidate); # we might have played him several times! } # large penalty for each time we've already played my $score = -16 * $already; return sprintf "played:%3.0f", $score if ($explain); return $score; }, ); # the 'difference in rating' scoring subroutine above needs a 'rating' # accessor method in the Algorithm::Pair::Best namespace: { package Algorithm::Pair::Best; sub rating { # add method to access ratings (used in scoreSubs above) my $my = shift; $my->{info}{rating} = shift if (@_); return $my->{info}{rating}; } } # back to the main namespace . . . In the above example, note that there is an extra optional $explain argument. Algorithm::Pair::Best never sets that argument, but user code can include: my @reasons; foreach my $sSub (@scoreSubs) { push(@reasons, $p1->$sSub($p2, 1)); # explain scoring } printf "%8s vs %-8s %s\n", $id1, $id2, join(', ', @reasons); to explain how $p1 scores when paired with $p2. Default: ref to empty array Accessor methods can read and write the following items: Accessor methods set the appropriate variable if called with a parameter, and return the current (or new) value. Add an item (or several items) to be paired. The item(s) can be any scalar, but it's most useful if it is a reference to a hash that contains some kind of ID and information (like rating and previous opponents) that can be used to score this item relative to the other items. If a single item is added, the return value is a reference to the Algorithm::Pair::Best object created for the item (regardless of calling context). If multiple items are added, the return value is the list of created Algorithm::Pair::Best objects in array context, and a reference to the list in scalar context. Note: the returned pair_items list is not very useful since they have not yet been paired. Returns the score (as calculated by calling the list of user-supplied scoreSubs) of the current pairing item relative to the candidate pairing item. The score is calculated only once, and the cached value is returned thereafter. If new_score is defined, the cached candidate and item scores are set to new_score. Sort each candidate list for each item. This method calls score (above) which caches the score for each candidate in each item. Normally this routine does not need to be called as the pick method calls sortCandidate before it starts picking. However, if you would like to modify candidate scores based on the sorting itself (for example, in the early rounds of a tournament, you may wish to avoid pairing the best matches against each other), you can call sortCandidates, and then make scoring adjustments (use the citems method to get a reference to the sorted list of candidates, then use $item->score($candidate, $new_score) to change the score). After changing the score cache, calling the pick method calls sortCandidates once more which will re-sort based on the new scores cache. Note: during sortCandidates, the scores are checked for non-negative values. If only 0 or negative values are used, the pick method can optimize by skipping branches that already score below the current best pairing. Any scores greater than 0 disable the 'negative scores only' (negOnly) optimization. Returns the best pairing found using the sliding window technique (calling wpick) as discussed in DESCRIPTION above. The size of the window is $windows pairs (2*$windows items). If no window argument is passed, the default window selected in the new call is used. pick returns the list (or a reference to the list in scalar context) of Algorithm::Pair::Best objects in pairing order: item[0] is paired to item[1], item[2] to item[3], etc. pick performs a sanity check on the pairs list, checking that no item is paired twice, and that all items are paired. Each time a pair is finalized in the pick routine above, it checks to see if a subroutine called progress has been defined. If so, pick calls $pair->progress($item0, $item1) where $item0 and $item1 are the most recently added pair of items. progress is not defined in the Algorithm::Pair::Best package. It is meant to be provided by the caller. For example, to print a message as each pair is finalized: . . . { package Algorithm::Pair::Best; sub progress { my ($my, $item0, $item1) = @_; # assuming you have provided an 'id' method that returns a string: print $item0->id, " paired with ", $item1->id, "\n"; } } # back to main:: namespace . . . Normally wpick is only called by the pick method. wpick returns a reference to a list of the best pairing of $window pairs (or 2*$window items) starting from the first unpaired item in the list (as determined by add order). The returned list is in pairing order as described in pick. If there are fewer than 2*$window items remaining to be paired, prints an error and returns the best pairing for the remaining items. If an odd number of items remain, prints an error and returns the best pairing excluding the last item. Note that while the pairing starts from the first item in the add list, the returned pairs list may contain items from outside the first 2*$window items in the add list. This is because each item has its own ordered list of preferred pairs. However, the first unpaired item in the add list will be the first item in the returned list. Similarly, in the 'odd number of items remaining' situation, the discarded item is not neccessarily the last item in the add list. scoreFunc is not defined in the Algorithm::Pair::Best package, but the pick method checks to see if the caller has defined a subroutine by that name. If defined, it is called each time a candidate score is added to the currScore total for a trial pairing. Normally, Algorithm::Pair::Best simply adds the scores and tries for the highest total score. Some pairings may work better with a different total score, for example the sum of the squares of the scores (to reduce the ability of one bad pairing to compensate for a group of good pairings). scoreFunc provides a hook for this modification. If defined, scoreFunc is called as: $score = $item->scoreFunc($candidate, $score); where $item is the current Algorithm::Pair::Best item being paired, $candidate is the current candidate item under consideration, and $score is $candidate's unaltered score (wrt $item). IMPORTANT: Remember to retain negative scores (or disable the negOnly optimization. Example use of scoreFunc: . . . { package Algorithm::Pair::Best; sub scoreFunc { my ($my, $candidate, $score) = @_; # we want to minimize the squares of the scores: return -($score * $score); } } # back to main:: namespace . . . The gopair script from the Games::Go::GoPair package uses Algorithm::Pair::Best to run pairings for a go tournament Reid Augustin, <reid@HelloSix.com> This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself, either Perl version 5.8.5 or, at your option, any later version of Perl 5 you may have available. Reid Augustin <reid@hellosix.com> This software is copyright (c) 2011 by Reid Augustin. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.
http://search.cpan.org/dist/Algorithm-Pair-Best/lib/Algorithm/Pair/Best.pm
crawl-003
en
refinedweb
#include <itkWeakPointer.h> Inheritance diagram for itk::WeakPointer< TObjectType >: 43 of file itkWeakPointer.h. Extract information from template parameter. Definition at line 47 of file itkWeakPointer.h. Constructor. Definition at line 50 of file itkWeakPointer.h. Copy constructor. Definition at line 54 of file itkWeakPointer.h. Constructor to pointer p. Definition at line 57 of file itkWeakPointer.h. Destructor. Definition at line 60 of file itkWeakPointer.h. Access function to pointer. Definition at line 85 of file itkWeakPointer.h. Referenced by itk::WeakPointer< itk::ProcessObject >::operator=(). Return pointer to object. Definition at line 68 of file itkWeakPointer.h. Definition at line 78 of file itkWeakPointer.h. Overload operator ->. Definition at line 64 of file itkWeakPointer.h. Comparison of pointers. Less than comparison. Definition at line 89 of file itkWeakPointer.h. Comparison of pointers. Less than or equal to comparison. Definition at line 97 of file itkWeakPointer.h. Overload operator assignment. Definition at line 109 of file itkWeakPointer.h. Overload operator assignment. Definition at line 105 of file itkWeakPointer.h. Template comparison operators. Definition at line 73 of file itkWeakPointer.h. Comparison of pointers. Greater than comparison. Definition at line 93 of file itkWeakPointer.h. Comparison of pointers. Greater than or equal to comparison. Definition at line 101 of file itkWeakPointer.h. Function to print object pointed to. Definition at line 116 of file itkWeakPointer.h.
http://www.itk.org/Doxygen36/html/classitk_1_1WeakPointer.html
crawl-003
en
refinedweb
#include <MoochoPack_LineSearch2ndOrderCorrect_StepSetOptions.hpp> Inheritance diagram for MoochoPack::LineSearch2ndOrderCorrect_StepSetOptions: The default options group name is LineSearch2ndOrderCorrect. The options group is: {verbatim} options_group LineSearch2ndOrderCorrect { newton_olevel = PRINT_NOTHING; constr_norm_threshold = 1e-3; constr_incr_ratio = 5.0; after_k_iter = 3; forced_constr_reduction = LESS_X_D; forced_reduct_ratio = 1.0; max_step_ratio = 0.7; max_newton_iter = 3; } {verbatim} {description} [newton_olevel] This is the output level for the internal newton iterations that compute the second order correction. The value of this is usually determined by default to be compatable with the output level for the whole algorithm. This option does not effect the rest of the output in the LinSearch2ndOrderCorrect step. The output level can be set specifically to the values of: {description} [PRINT_USE_DEFAULT] Let it be determined by overall algorithm print level. [PRINT_NOTHING] No output about the newton iterations is performed. [PRINT_SUMMARY_INFO] A compact summary table is created. [PRINT_STEPS] Print more detailed info about the steps. [PRINT_VECTORS] Also print out calculated vectors. Careful, this could produced a lot of output for large problems. {description} [constr_norm_threshold] See after_k_iter. [after_k_iter] When ||c_k||inf < |constr_norm_threshold| and k >= |after_k_iter| then the second order correction will be considered if full steps are not taken. Having a dual test allows an initail nonoptimal ||c(x)|| = 0 to force a second order correction too early. [forced_constr_reduction] Determines how much the constraint error should be reduced for each SQP iteration. The legal values are: {description} [LESS_X_D] Just find a point where ||c(x_k+d_k+w)|| < ||c(x_k+d_k)||. The algorithm should be able to compute this point in one iteration unless it can't compute a descent direction for ||c(x)||. [LESS_X] Find a point where ||c(x_k+d_k+w)|| < ||c(x_k)||. This is a much more difficult test requirement. If the algorithm can not find an acceptable point in the alotted iterations (see max_newton_iter) then a point ||c(x_k+d_k+w)|| < ||c(x_k+d_k)|| will be accepted if possible. {description} [max_step_ratio] This limits the size of the correction term to be: w = min( 1, |max_step_ratio|/(||w||inf/||d_k||) ) * w. This option keeps the x_k+d_k+w from getting too far away from x_k+d_k in case bad corrections are computed. A value less than one is probably best. [max_newton_iter] Limits the number of newton iterations performed. It may be best to keep this a small number. {description} Definition at line 98 of file MoochoPack_LineSearch2ndOrderCorrect_StepSetOptions.hpp. Overridden from SetOptionsFromStreamNode. Implements OptionsFromStreamPack::SetOptionsFromStreamNode.
http://trilinos.sandia.gov/packages/docs/r10.0/packages/moocho/src/MoochoPack/doc/html/classMoochoPack_1_1LineSearch2ndOrderCorrect__StepSetOptions.html
crawl-003
en
refinedweb
#include <MoochoPack_InitFinDiffReducedHessian_Step.hpp> Inheritance diagram for MoochoPack::InitFinDiffReducedHessian_Step: 69 of file MoochoPack_InitFinDiffReducedHessian_Step.hpp. Definition at line 78 of file MoochoPack_InitFinDiffReducedHessian_Step.hpp. Definition at line 56. Definition at line 68 of file MoochoPack_InitFinDiffReducedHessian_Step.cpp. Definition at line 278 of file MoochoPack_InitFinDiffReducedHessian_Step.cpp.
http://trilinos.sandia.gov/packages/docs/r10.0/packages/moocho/src/MoochoPack/doc/html/classMoochoPack_1_1InitFinDiffReducedHessian__Step.html
crawl-003
en
refinedweb
Problem code: NUKES There are K nuclear reactor chambers labelled from 0 to K-1. Particles are bombarded onto chamber 0. The particles keep collecting in the chamber 0. However if at any time, there are more than N particles in a chamber, a reaction will cause 1 particle to move to the immediate next chamber(if current chamber is 0, then to chamber number 1), and all the particles in the current chamber will be be destroyed and same continues till no chamber has number of particles greater than N. Given K,N and the total number of particles bombarded (A), find the final distribution of particles in the K chambers. Particles are bombarded one at a time. After one particle is bombarded, the set of reactions, as described, take place. After all reactions are over, the next particle is bombarded. If a particle is going out from the last chamber, it has nowhere to go and is lost. The input will consist of one line containing three numbers A,N and K separated by spaces. A will be between 0 and 1000000000 inclusive. N will be between 0 and 100 inclusive. K will be between 1 and 100 inclusive. All chambers start off with zero particles initially. Consists of K numbers on one line followed by a newline. The first number is the number of particles in chamber 0, the second number is the number of particles in chamber 1 and so on. Input: 3 1 3 Output: 1 1 0 it doesn't take java file. correctly. i mean. i have tested this program but Judge fails for all testcases. im not sure whether it is giving inputs correctly.. please provide some examples for submitting a code. i.e. give a sample java file which takes input and prints values. You will find that information here. Hi, no matter what optimization i do for my java code, i am not able to complete 2 test cases of yours withing the time limit. can some one help me with this. -karthik What is the complexity of your solution? I have a O(log(A)) solution but still it is giving time limit exceeded. i cant believe it. oh I was assuming multiple input line. now its fine. @Directi Admin: my algorithm works in 0(n), in the worst case for my implementation. still i am not able to write within time for one of the test cases. can i get the test data for that test case. Thx. -karthik An O(n) algorithm won't work. You need a better complexity for it to pass. hi, please check my solutions in java and c . both have same algorithm but yet java solution takes 1.95 s and c takes 0 s. are there any plans to remove this biasing? basically are u calling the .class file from some script and measure how much script takes or u calling the static function from already running class and measuring how much the function took? more precisely does the time include starting time of jvm or it is the more time taken by jvm for executing same instructions? I feel there is something seriously wrong........Even with the maximum inputs my code works perfectly fast....still i am getting time limit exceeded error.........any other way to measure the time.......plz suggest One simple way would be to check the complexity and then check if it is such that the implementation would work in time. You can try executing the code locally for the maximum sized test case and see how much time it takes. i have tried it on my local pc with maximum input say 1000000000 100 100 on eclipse.....and i feel it hardly takes a second or so.......... And how can i check the complexity of my solution............ The complexity of your code is of a magnitude higher than O(A) and A can be as large as 10^9. You need to reduce the complexity. I have tested my code with various test cases. It is showing correct results. But it is showing wrong answer at your end with all your test cases. Please check whether I am using Input and output correctly. Should I display - Input : ...... next line Output : .... or just the Input values and in the next line just the output values..? You don't need to display the input again. Just the output as specified in the problem statement. It seems you already have got this problem accepted. Submissions for this problem will be made available after some days. Why can't we submit answers to this problem ? Submissions will be enabled soon. why has the problem been removed ? HI, I was working on this problem: I am not able to submit my solution. Is the Nuclear Reactor problem removed? The problem will be up soon. Sorry for the inconvenience Hi I am getting a wrong answer for the case 10^9. what could be the possible problems? I think that the code I have implemented is correct. The ID of my code is 190634. I am getting a SIGFPE (Floating Point Exception) for the case 10^9. Also, the answers for all the other test cases were correct. How to deal with this?? Also, when I submitted the solution, it showed the results for different values of N, which according to me, should be A. Regarding SIGFPE, please don't bother anyone. Looks like C was assigning 0 to the variables that went outside the range of 'int'. Division by that variable was giving the Floating Point Exception. @ADMIN PLZ HELP I GET SIGFPE for A = 10^9. BUT WHY I CAN'T SEE. Just have a look, i've tried a lot :| @ Admin I am able to pass all the test cases except the third one. I have tried but I am unable to make out why, Please help. :) oh! i got it!! got mine ass blown off in such a easy question. tips for other people: 1)try to use log function. 2)take care of special case n=0 @ admin My code runs successfully on my system....bt gives wrong answer whn i submit it here. Am i too bad in algorithms or am I correct in thinking that there is an O(K) solution to this problem? ADMIN, m get this error many times on submitting wht does this mean :( internal error occurred in the system. ADMIN here is my code m getting :( internal error occurred in the system on submitting . any suggestion what might be th reason .. ya admin me 2 getting the same result :( internal error occurred in the system on submitting . plzz help When I had executed the code on my system it took 0.1 sec but after submitting here I got "Time Limited Exceeded", 2.85 sec when A = 10^9 .... There is more than one possible test case with A = 10^9. Have you tried N=1 and K=100? //rishabh jain #include<iostream> #include<cstdio> using namespace std; int main() { int a,n,k; cin>>a>>n>>k; for(int i=0;i<k;i++) cout<<a%(n+1); a=a/(n+1); } system("pause"); return 0; Your code gives the wrong answer for the sample input. Read the FAQ if you don't know how to test your code. @stephen : i'v tried a lot. I can't point out any error in my logic.Codechef still says my solution is incorrect :/ Please help. #include<stdio.h> long int a,pow; int n,k,i=0,q=0,g,b,t; scanf("%ld",&a); scanf("%d",&n); scanf("%d",&k); while(i<k&&n!=0) { q=0; pow=(n+1); while(q<(i-1)){ pow=(n+1)*pow; if(pow>a){break;} q++; g=(a/pow); b=(n+1); t=g%b; printf("%d ",t); i++; i=0; if(n==0){while(i<k){printf("0 ")
http://www.codechef.com/problems/NUKES
crawl-003
en
refinedweb
import random import time def chase(): # Create an amphitheater in which our turtles will fight to the death amphitheater = makeWorld() # Create two turtles: a predator and its prey predator = makeTurtle(amphitheater) prey = makeTurtle(amphitheater) # Position the turtles in their starting positions. # The predator is aiming at the prey; the prey is pointing South-East. penUp(predator) penUp(prey) x1 = random.choice(range(0, amphitheater.getWidth()/4)) y1 = random.choice(range(0, amphitheater.getHeight()/4)) x2 = amphitheater.getWidth()/2 y2 = amphitheater.getHeight()/2 moveTo(predator, x1, y1) moveTo(prey, x2, y2) turnToFace(predator, x2, y2) turnToFace(prey, amphitheater.getWidth(), amphitheater.getHeight()) # Have the predator chase the prey and the prey run away until the # predator has closed to within tooth distance. while distanceBetween(predator, prey) > 5: # The prey panics and keeps changing direction. evasiveAngle = random.choice(range(-90, 90)) turn(prey, evasiveAngle) forward(prey, 40) # The predator aims at where the prey is, not where it is moving to. # But if it is fast enough, it won't keep going round in a circle. turnToFace(predator, prey.getXPos(), prey.getYPos()) # We need to stop the predator overshooting. # See what happens if you take this out. closingDistance = int(min(distanceBetween(predator, prey), 50)) # Close in on the victim forward(predator, closingDistance) # PaREMOVEDe half a second so that we can watch what happens at leisure. time.sleep(0.5) # Compute the Euclidean distance between turtles. Very like the distance between # colors, except that they have three dimensions (RGB). Turtles live in a # two-dimensional plane. def distanceBetween(turtle1, turtle2): x1 = turtle1.getXPos() x2 = turtle2.getXPos() y1 = turtle1.getYPos() y2 = turtle2.getYPos() dist = math.sqrt((x1-x2)**2 + (y1-y2)**2) return dist
http://coweb.cc.gatech.edu/cs1315/5233
crawl-003
en
refinedweb
Apache::PAR::tutorial - Information on getting Apache::PAR up and running. Apache:. For the package developer, Apache::PAR allows for easy package management, which frees the author from the task of creating a full Perl package. Apache::PAR allows the package developer to set the required Apache configuration directly in a package which greatly simplifies the install process for the end user and gives the the developer the ability to assign URL's which remain the same on all systems that the package is installed on. It is possible to decompress the contents of the PAR file during startup, which allows the use of code which relies on outside content (templating systems, etc) Once Apache::PAR is installed, it can be configured in an Apache configuration file with as little as two lines. Once setup, to add a new .par package to the system a user only has to place the package in the directory specified in the Apache configuration and restart Apache. All other configuration needs are provided by the module itself. Apache::PAR is installed in a manner similar to other CPAN modules. Either use CPAN to install, or download the package and install by hand. To install from CPAN, simply start the CPAN shell and execute an install command. For instance: perl -MCPAN -eshell; install Apache::PAR Select [y]es to install any required dependencies. NOTE: If you are installing Apache::PAR using CPAN as root you may need to force the install (force install Apache::PAR.) This is because the tests rely on loading .par files from a test directory, which may fail due to permission problems. Unless compiled to do so, Apache will not run as the root user, however, the modules are tested from the .cpan directory under root's home directory. This will hopefully be addressed in a future version of Apache::PAR. Also, you may want to add your Apache bin/ directory to your path if it isn't already set. This allows Apache::Test to choose which Apache to use when testing. Download the latest version of Apache::PAR from CPAN, as well as any dependencies which you do not already have installed. Below is a list of modules which are required by Apache::PAR. For some of these modules, a compiler may be required if building from source (although, Apache::PAR itself is written in pure perl.) NOTE: It is possible to install all of these on Win32 systems without a compiler. Most of these modules are avalable through ppm, and PAR itself has it's own system for downloading binary code for platforms which do not have a compiler (as of this writing, PAR .74 is not available on ppm, but a normal install should work) To install a Perl module manually, use the following steps: unpack, create the makefile, make the package, optionally test the package, and install the package: If you want to install Apache::PAR in a directory other than the default, use the PREFIX option to step 3. above: perl Makefile.PL PREFIX=/path/to/install This is useful if you are installing Apache::PAR as a non-root user. If you do this, however, you may need to add the path to find Apache::PAR to a <PERL> section in your Apache configuration. See the mod_perl documentation for more information. NOTE: Similarly to the CPAN install instructions above, if you are installing this package as root, or using mod_perl 2.x, you may run into some problems with permissions when running step 5. above (make test.) In order to run the tests as root, you will have to build Apache::PAR from a directory that is readable by the Apache user, normally the "nobody" user, that you wish to test with. NOTE: If you have both mod_perl 1.x and 2.x installed, you may have to setup which one to test against before installing. See Apache::Test for more information. If you have installation problems which you cannot resolve, see the CONTACT section to notify the module author of the problem. Once Apache::PAR has been installed, it needs to be configured in order to tell it which packages should be included when starting Apache. A short example follows: PerlSetVar PARInclude /path/to/dir PerlAddVar PARInclude /path/to/another/dir ... PerlAddVar PARInclude /path/to/a/file.par PerlAddVar PARInclude /path/to/another/file.par ... PerlAddVar PARTempDir /path/to/temp/dir ... PerlModule Apache::PAR PerlInitHandler Apache::PAR PLATFORM NOTE: On Win32 platforms, the line reading PerlModule::Apache::PAR should be: <PERL> use Apache::PAR; </PERL> IMPORTANT: PerlSetVar lines related to the configuration of the Apache::PAR module *must* appear above the PerlModule line for Apache::PAR. This is due to the order in which Apache parses the configuration file and what information is available to Apache::PAR when it is loaded. IMPORTANT: When using mod_perl 2.x, if you are using both mod_perl 1.x and 2.x on the same machine, you may need to add a line similar to: PerlModule Apache2 This line should be added before any Apache::PAR lines in the configuration. NOTE: Alternatively, Apache::PAR can be configured completely in a startup.pl or PERL section by using a configuration like the following: use Apache::PAR qw( /path/to/dir /path/to/another/dir /path/to/a/par/file.par /path/to/another/par/file.par ); The files and directories listed in the import list for Apache::PAR will be included in the same fashion as PAR archives added with PARInclude. However, not mix PerlModule Apache::PAR with use Apache::PAR, only one of these should exist in a given configuration. If you need to do something like this, you can use import Apache::PAR qw(...); after a PerlModule Apache::PAR entry (or after a previous use Apache::PAR line). Each configuration option is described below in more detail: PARInclude: PARInclude options are used to specify either PAR archives to be loaded, or directories to be scanned for PAR archives. For either directories or files, the option can include either a full or relative path (without a leading /). If a relative path is specified, Apache::PAR will attempt to find files based on Apache's server_root (normally the base directory in which Apache is installed.) For instance, if Apache is installed in /usr/local/apache, then including "PerlSetVar PARInclude par/" in your configuration would attempt to load .par files from /usr/local/apache/par PARDir *DEPRECATED*: The PARDir directive has been depracated and for now works the same way as PARInclude. This directive may be removed in a future version of Apache::PAR. PARFile *DEPRECATED*: The PARFile directive has been depracated and for now works the same way as PARInclude. This directive may be removed in a future version of Apache::PAR. A PAR archive will be rejected if it is not readable by the user which Apache is started as or if the file is not in zip file format. Otherwise, Apache::PAR will then open each .par archive found and attempt to load any configuration found within. Look in your Apache error_log for errors related to loading .par archives. PerlInitHandler Apache::PAR: This directive tells Apache that Apache::PAR should be checked during requests to see if any content has been changed inside PAR archives. If any content has changed, the modules and content will be reloaded automatically. This is probably a good setting to use in development, but you may want to consider skipping this in a production environment due to the overhead of checking the modified times of packages. PARTempDir: This directive is used to specifiy the location of the directory which will be used when unpacking any PAR content (for archives which require this functionality.) If PARTempDir is set to NONE, archives which require unpacking will not load during startup, and a warning will be generated. PARTempDir defaults to the platform specific temp directory if available. At a minimum, creating .par packages is as simple as making a web.conf file which has information about how to access the contents of your package and creating a zip file with this file as well as the content and programs. The web.conf file contains the Apache configuration instructions necessary to use the content included in your package. This file should be placed in the main directory of the .par file and is in Apache configuration file format. The only addition to this format by Apache::PAR is the ##PARFILE## meta directive, which is used to specify the location of the current .par file (since this information is not known at package creation time.) Below is a sample web.conf file: Alias /myapp/cgi-perl/ ##PARFILE##/ PerlModule Apache::PAR::Registry <Location /myapp/cgi-perl> Options +ExecCGI SetHandler perl-script PerlHandler Apache::PAR::Registry </Location> This web.conf file creates a /myapp/cgi-perl location on the web server to serve Registry scripts from inside your .par archive. Similar sections can be added for other types of content including static content, Registry scripts, PerlRun scripts, or mod_perl modules. Another section below shows the configuration necessary to load a sample mod_perl module: PerlModule MyApp::TestMod Alias /myapp/mod/ ##PARFILE##/ <Location /myapp/mod> SetHandler perl-script PerlHandler TestMod </Location> This configuration section would load a mod_perl module named MyApp::TestMod and make it available under the url /myapp/mod. For other types of configuration, see the documentation for the particular content type you wish to add. Another special variable, ##UNPACKDIR## allows the managing of uncompressing content during Apache startup. If ##UNPACKDIR## is specified, it does two things. 1) Tells Apache::PAR that it is expected to decompress the content and 2) defines the location to directives in the web.conf where this content was unpacked to. For instance, to set a template directory to be unpacked, and create a environment variable pointing to this location for content, you could use something like the following: PerlSetEnv TestModTemplateDir ##UNPACKDIR##/templates With this directive, a PAR archive can also be treated as any other Apache content, by using ##UNPACKDIR## in place of ##PARFILE##, and using the normal Apache or mod_perl modules for handling content. For instance: Alias /myapp/cgi-perl/ ##UNPACKDIR##/cgi-perl/ PerlModule Apache::Registry <Location /myapp/cgi-perl> Options +ExecCGI SetHandler perl-script PerlHandler Apache::Registry </Location> To add content to a package simply create your scripts, modules and static content in the appropriate directory. Below are the default directories for each content type. It is probably a good idea to use the directories listed below, as the selection of the directory each type of content will be read from is configured by the end user of the package. There are a couple items of note in the list above. Both Registry and PerlRun scripts are loaded from a scripts/ directory by default. Normally, a package would not contain both Registry and PerlRun scripts. If a package does need both, using PerlRun should work for both Registry and PerlRun. Also, which directory a module is installed into should be based on the same criteria as for a normal package (i.e. it should go into / or lib/ unless it contains XS code, in which case it should be installed in the appropriate directory.) NOTE: If you wish your package to work under both mod_perl 1.x and 2.x environments, please see for more information on porting modules to mod_perl 2.x. Once the web.conf file has been created and content has been created and moved to the appropriate directory, the final package can be created. As noted previously, .par archives use the zip file format so that any program which creates zip files should be sufficient for creating .par archives. PAR archives do not currently support encrypted zip files, however. On a un*x system, a command similar to the following should be sufficient to create a par archive: zip -r myapp.par * After packaging (or during installing), the .par archive must be made executable if script files (Registry or PerlRun) files will be used out of the archive.
http://search.cpan.org/dist/Apache2-PAR/PAR/tutorial.pod
crawl-003
en
refinedweb
As I mentioned we are learning the new enhancements in ASP.NET 4 and ASP.NET 4 Web Forms from scratch at the Sarasota Web Developer Group. At the first meeting we talked about many of the new enhancement in ASP.NET 4 Web Forms, including the new enhancements in ASP.NET 4 Web Forms Routing. GetRouteUrl In my last post I neglected to mention the Control.GetRouteUrl Method in System.Web.UI as a way to get routes from the route name and various route parameters. Using the following route from the previous post: public class Global : System.Web.HttpApplication { void Application_Start(object sender, EventArgs e) { RegisterRoutes(RouteTable.Routes); } void RegisterRoutes(RouteCollection routes) { routes.MapPageRoute( "Contact_Details", // Route Name "Contacts/Details/{id}", // Url and Parameters "~/Contacts/Details.aspx" // Page Handling Request ); } } One can get the route for a particular contact inside a Page_Load Event, for example, by calling GetRouteUrl on the Page as follows: protected void Page_Load(object sender, EventArgs e) { var route = GetRouteUrl("Contact_Details", new { id = 1 }); } The value of route in this case will be “/Contact/Details/1″. GetRouteUrl when DataBinding in Grid The GetRouteUrl Method is particularly useful in a Grid when you want to display a list of contacts with a link to the details of each. Shown below is just a snippet of the Grid source code where I am using GetRouteUrl with the proper Route Name and Parameters within the NavigateUrl Property of the HyperLink: <asp:TemplateField> <ItemTemplate> <asp:HyperLink </asp:HyperLink> </ItemTemplate> </asp:TemplateField> Conclusion I’ll be posting about other topics presented at the group. If you live in the area, please come out and attend the second meeting focusing on ASP.NET MVC, DynamicData, Castle ActiveRecord, and more. I’ll also be presenting What’s New in ASP.NET 4 at the Orlando Code Camp.
http://codebetter.com/davidhayden/2010/03/07/getrouteurl-in-asp-net-4-web-forms-routing/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+CodeBetter+%28CodeBetter.Com%29
crawl-003
en
refinedweb
used by default for GUI.Toggle controls. using UnityEngine; public class Example : MonoBehaviour { // Modifies only the toggle style of the current GUISkin GUIStyle style; public bool val = false; void OnGUI() { GUI.skin.toggle = style; val = GUILayout.Toggle(val, "A Toggle control"); } } Did you find this page useful? Please give it a rating:
https://docs.unity3d.com/2018.3/Documentation/ScriptReference/GUISkin-toggle.html
CC-MAIN-2019-51
en
refinedweb
In my previous article, you learned about how to create wizards and some simple objects such as DTE, Solutions, Project and Project Item. These objects help us to customize our Visual Studio working environment. In my previous article, Basics of Extending Your Working Environment in Visual Studio,Before we start with development lets get familiar with classes we will be using.We will be using some classes from System.Reflection namespace and many classes from EnvDte namespace. Lets start with EnvDTE namespaceObject model in EnvDTE namespace is called as Visual Studio .net Automation Object Model. DTE stands for Development Tools Extensibility.DTE ObjectEnvDDObject model at this level looks simple and easy to understand. From now onwards it starts getting complicated.FileCodeModelEach likeDevelop Sample to read object model in a projectNow that we are aware of various objects, lets start with developing a small sample.We are building on top of code from the previous article. In previous article we studied Solution object, ProjectItems collection, ProjectItem object.Lets add a tree view control to UIFor each project item selected we will add Namespaces, Classes and Interfaces in the same. Add following code in lstProjectItems_SelectedIndexChanged event to WizardSampleUI.cs file.private void lstProjectItems_SelectedIndexChanged(object sender,System.EventArgs e){trvCodeElements.Nodes.Clear();foreach(Project prj in this.dte.Solution.Projects){if(prj.Name == lstProjects.SelectedItem.ToString()){foreach(ProjectItem prjItem in prj.ProjectItems){if(lstProjectItems.SelectedItem.ToString() == prjItem.Name){LoadProjectItemdataInProjectItem(prjItem);break;}}}}}The function LoadProjectItemdataInProjectItem will add all the items in the prjItem to treeview.Add following function to WizardSampleUI.cs file./// <summary>// Displays the project item data for selected project./// </summary>private void LoadProjectItemdataInProjectItem(ProjectItem prjItem){switch(prjItem.Kind){case EnvDTE.Constants.vsProjectItemKindMisc:lblItemKindText.</param>private void loadProjectItemdata(ProjectItem prjToLoad){prjToLoad.Open("{7651A701-06E5-11D1-8EBD-00A0C90F26EA}"); //Viewkindpertaining to type of view to use.TreeNode rootNd = trvCodeElements.Nodes.Add(prjToLoad.Name);TreeNode nsNodes = trvCodeElements.Nodes.Add("NameSpaces");foreach(CodeElement cdElement in prjToLoad.FileCodeModel.CodeElements){if(cdElement.Kind == vsCMElement.vsCMElementNamespace){TreeNode ndNameSpace = nsNodes.Nodes.Add(cdElement.FullName);ndNameSpace.Tag = cdElement; //Tag will later used to navigate to project item.TreeNode ndClassNodes = ndNameSpace.Nodes.Add("Classes");TreeNode ndInterfacesNodes = ndNameSpace.Nodes.Add("Interfaces");foreach(CodeElement cdClassElmnt in ((CodeNamespace)cdElement).Members){if(cdClassElmnt.Kind == vsCMElement.vsCMElementClass){TreeNode classNode = ndClassNodes.Nodes.Add(cdClassElmnt.Name);classNode.Tag = cdClassElmnt;//Tag will later used to navigate to project item.loadClassMembers(classNode,(CodeClass)cdClassElmnt);continue;}if(cdClassElmnt.Kind == vsCMElement.vsCMElementInterface){TreeNode interfaceNode = ndInterfacesNodes.Nodes.Add(cdClassElmnt.Name);interfaceNode.Tag = cdClassElmnt;//Tag will later used to navigate to project item.loadInterfaceMembers(interfaceNode,(CodeInterface)cdClassElmnt);continue;}//Same way you can add code for Delegates, Enums and Structs.}}}}This is an important piece of code. There are couple of important methods and properties used in this function GUID View Type {7651A701-06E5-11D1-8EBD-00A0C90F26EA} Code View {7651A700-06E5-11D1-8EBD-00A0C90F26EA} Debugger View {7651A702-06E5-11D1-8EBD-00A0C90F26EA} Designer View {00000000-0000-0000-0000-000000000000} Default view for the item {7651A703-06E5-11D1-8EBD-00A0C90F26EA} Text view Also check the FileCodeModel property used on prjToLoad. We are using CodeElements collection on this object. The next step would be to generate code. Happy Wizarding till then!!!! View All
https://www.c-sharpcorner.com/article/extending-your-working-environment-in-visual-studio/
CC-MAIN-2019-51
en
refinedweb
Migration Guide In the update to ember-cli-page-object v1.x, we’ve defined more intuitive behavior and moved to a more polished and mature API. This sounds great, but it also comes with a cost: you need to migrate your test suite. This page includes a list of breaking changes and API enhancements to help you upgrade as quickly and painlessly as possible. - Change build()calls to create()calls - Components are now just plain objects .customHelperis deprecated - Collections are now 0-based indexoption renamed to atand is 0-based - Remove parentheses when getting a value for a query or predicate - Scope and resetScope - The multipleoption .visitable() .clickOnText() Change build() calls to create() calls This is very simple: const page = PageObject.build({ // ... }); Should be changed to: const page = PageObject.create({ // ... }); Components are now just plain objects In v0.x we deprecated the component function. In v1.0 we removed it completely in favor of using plain JS objects. const page = PageObject.create({ // ... modal: component({ // modal component definition }), // ... }); Should be changed to: const page = PageObject.create({ // ... modal: { // modal component definition }, // ... }); .customHelper .customHelper is now deprecated. Use Ceibo descriptors instead. ( Ceibo is a small library for parsing trees. You can check it out here.) With the old v0.x syntax, you would define a custom helper like: var disabled = customHelper(function(selector, options) { return $(selector).prop('disabled'); }); On version 1.x this can be represented as: import { findElement } from 'ember-cli-page-object/extend'; export default function disabled(selector, options = {}) { return { isDescriptor: true, get() { return findElement(this, selector, options).is(':disabled'); } } } Example usage: let page = PageObject.create({ scope: '.page', isAdmin: disabled('#override-name') }); page.isAdmin will look for elements in the DOM that match “.page #override-name” and check if they are disabled. Collections are now 0-based When we first implemented the collection function, we were using the nth-of-type CSS pseudo-class which is 1-based, so we though it would be clearer to also make collections 1-based. Later we decided to change to an implementation to use :eq, which is 0-based. We decided v1.0 was the moment to break compatibility and switch to 0-based collections. <table> <tbody> <tr> <td>Jane</td> </tr> <tr> <td>John</td> </tr> </tbody> </table> Example from the old v0.x syntax: const page = create({ users: collection({ itemScope: 'table tr', item: { name: text('td') } }) }); page.users(1).name(); // returns 'Jane' page.users(2).name(); // returns 'John' Example in v1.x syntax: const page = create({ users: collection({ itemScope: 'table tr', item: { name: text('td') } }) }); page.users(0).name; // returns 'Jane' page.users(1).name; // returns 'John' index option renamed to at and is 0-based In v0.x, the index option was used to reduce the set of matched elements to the one at the specified index which was 1-based. A small example from v0.x: const page = create({ secondTitle: text('h1', { index: 2 }) }); page.secondTitle(); // translates into $('h1:eq(1)').text() In v1.x this should be changed to: const page = create({ secondTitle: text('h1', { at: 1 }) }); page.secondTitle; // translates into $('h1:eq(1)').text() Remove parentheses when getting a value for a query or predicate In v1 we decided to go a step further on improving the code and polished the tree structure we already used when defining page objects. The Ceibo project was born (you can see it over here) which defines a simple way to create complex properties within an object. So for most cases properties used only to get a value will no longer need parentheses when accessed. const page = create({ scope: '#my-page', title: text('h1'), fillInName: fillable('#name') }); In v0.x the following code was used within tests: assert.equal(page.title(), 'My page title'); page.fillInName('Juan'); // fill #name with 'Juan' In v1.x this should be changed to: assert.equal(page.title, 'My page title'); page.fillInName('Juan'); // Doesn't change Scope and resetScope In v0.x defining the scope attribute on a page object used to override how the element was looked up in the DOM. Example: const page = create({ scope: '#my-page', title: text('h1'), fillInName: fillable('#name') modal: { scope: '#my-modal', title: PageObject.text('h3') } }); When running tests in v0.x: page.title(); // translates to `find('#my-page h1').text()` page.modal().title() // transaltes to `find('#my-modal h3').text()` In v1.0 we decided to implement scope inheritance, this means that if a component defines a scope and has a child component, the latter will inherit its parent scope. page.title; // translates into find('#my-page h1').text() page.modal.title // transaltes into find('#my-page #my-modalh3').text() In some scenarios, this change of behavior will not affect test assertions but in some cases, it will. If you want to make sure lookups work as in v0.x you can use the resetScope option (you can see more options on the documentation site). Changed definition to keep lookups the same: const page = create({ scope: '#my-page', title: text('h1'), fillInName: fillable('#name') modal: { scope: '#my-modal', resetScope: true, title: text('h3') } }); The multiple option Another cause of failure when upgrading to v1.x is that, by default, an error will be thrown if multiple elements match a query or predicate. For example, if the previous page object definition is used with the following template: <div class="#my-page"> <h1>My title</h1> <h1>My other title</h1> </div> This call will throw an error: page.title; // Kaboom! This behavior applies to every DOM lookup except count. If you need to match multiple elements you can use the multiple option on your properties. The resulting behavior will vary depending on the property. As an example, you can check how the multiple option behaves on the text property here. .visitable() The signature for .visitable() has changed. Instead of receiving two distinct object parameters (dynamic segments and query params) now it receives only one. The idea is to fill the dynamic segments first, using the values from the param object and then use the rest of the keys and values as query params. var page = create({ visit: visitable('/users/:user_id') }); page.visit({ user_id: 1, expanded: true }); // is equivalent to visit("/users/1?expanded=true"); .clickOnText() The behaviour of .clickOnText() has improved. When looking for elements to click (based on text), the property now considers the parent element as a valid element to click. This allows to do things like <div class="modal"> ... <button>Save</button><button>Cancel</button> var page = PageObject.create({ clickButton: clickOnText('button'), clickOn: clickOnText('.modal') }); // ... page.clickButton('Save'); page.clickOn('Save'); Before, the first action ( clickButton) would not have worked, only the second action would have found the element. Now, both actions work and both actions do click the same button.
http://ember-cli-page-object.js.org/docs/v1.16.x/migrating
CC-MAIN-2019-51
en
refinedweb
Simple pubsub pattern for asyncio applications Project description Simple pubsub pattern for asyncio applications. aiopubsub is only tested with Python 3.6. There are no plans to support older versions. import asyncio import aiopubsub import logwood async def main(): logwood.basic_config() hub = aiopubsub.Hub() publisher = aiopubsub.Publisher(hub, prefix = aiopubsub.Key('a')) subscriber = aiopubsub.Subscriber(hub, 'subscriber_id') sub_key = aiopubsub.Key('a', 'b', '*') subscriber.subscribe(sub_key) pub_key = aiopubsub.Key('b', 'c') publisher.publish(pub_key, 'Hello subscriber') await asyncio.sleep(0.001) # Let the callback fire. # "('a', 'b', 'c') Hello subscriber" will be printed. key, message = await subscriber.consume() assert key == aiopubsub.Key('a', 'b', 'c') assert message == 'Hello subscriber' subscriber.remove_all_listeners() asyncio.get_event_loop().run_until_complete(main()) or, instead of directly subscribing to the key, we can create a listener that will call a synchronous callback when a new message arrives. def print_message(key, message): print(key, message) subscriber.add_sync_listener(sub_key, print_message) Or, if we have a coroutine callback we can create an asynchronous listener: async def print_message(key, message): await asyncio.sleep(1) print(key, message) subscriber.add_async_listener(sub_key, print_message) Aiopubsub will use logwood if it is installed, otherwise it will default to the standard logging module. Note that logwood is required to run tests. Architecture Hub accepts messages from Publishers and routes them to Subscribers. Each message is routed by its Key - an iterable of strings forming a hierarchic namespace. Subscribers may subscribe to wildcard keys, where any part of the key may be replaced replaced with a * (star). addedSubscriber and removedSubscriber messages When a new subscriber is added the Hub sends this message { "key": ("key", "of", "added", "subscriber"), "currentSubscriberCount": 2 } under the key ('Hub', 'addedSubscriber', 'key', 'of', 'added', 'subscriber') (the part after addedSubscriber is made of the subscribed key). Note the currentSubscriberCount field indicating how many subscribers are currently subscribed. When a subscriber is removed a message in the same format is sent, but under the key ('Hub', 'removedSubscriber', 'key', 'of', 'added', 'subscriber'). Project details Release history Release notifications Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/aiopubsub/
CC-MAIN-2019-51
en
refinedweb
import "github.com/wvanbergen/kazoo-go" consumergroup.go kazoo.go topic_admin.go topic_metadata.go var ( ErrRunningInstances = errors.New("Cannot deregister a consumergroup with running instances") ErrInstanceAlreadyRegistered = errors.New("Consumer instance already registered") ErrInstanceNotRegistered = errors.New("Consumer instance not registered") ErrPartitionClaimedByOther = errors.New("Cannot claim partition: it is already claimed by another instance") ErrPartitionNotClaimed = errors.New("Cannot release partition: it is not claimed by this instance") ) var ( ErrTopicExists = errors.New("Topic already exists") ErrTopicMarkedForDelete = errors.New("Topic is already marked for deletion") ErrDeletionTimedOut = errors.New("Timed out while waiting for a topic to be deleted") ) var ( ErrInvalidPartitionCount = errors.New("Number of partitions must be larger than 0") ErrInvalidReplicationFactor = errors.New("Replication factor must be between 1 and the number of brokers") ErrInvalidReplicaCount = errors.New("All partitions must have the same number of replicas") ErrReplicaBrokerOverlap = errors.New("All replicas for a partition must be on separate brokers") ErrInvalidBroker = errors.New("Replica assigned to invalid broker") ErrMissingPartitionID = errors.New("Partition ids must be sequential starting from 0") ErrDuplicatePartitionID = errors.New("Each partition must have a unique ID") ) var ( FailedToClaimPartition = errors.New("Failed to claim partition for this consumer instance. Do you have a rogue consumer running?") ) BuildConnectionString builds a Zookeeper connection string for a list of nodes. Returns a string like "zk1:2181,zk2:2181,zk3:2181" ConnectionStringWithChroot builds a Zookeeper connection string for a list of nodes and a chroot. The chroot should start with "/". Returns a string like "zk1:2181,zk2:2181,zk3:2181/chroot" ParseConnectionString parses a zookeeper connection string in the form of host1:2181,host2:2181/chroot and returns the list of servers, and the chroot. type Config struct { // The chroot the Kafka installation is registerde under. Defaults to "". Chroot string // The amount of time the Zookeeper client can be disconnected from the Zookeeper cluster // before the cluster will get rid of watches and ephemeral nodes. Defaults to 1 second. Timeout time.Duration // Logger Logger zk.Logger } Config holds configuration values f. NewConfig instantiates a new Config struct with sane defaults. Consumergroup represents a high-level consumer that is registered in Zookeeper, CommitOffset commits an offset to a group/topic/partition func (cg *Consumergroup) Create() error Create registers the consumergroup in zookeeper func (cg *Consumergroup) Delete() error Delete removes the consumergroup from zookeeper func (cg *Consumergroup) Exists() (bool, error) Exists checks whether the consumergroup has been registered in Zookeeper FetchOffset retrieves all the commmitted offsets for a group FetchOffset retrieves an offset to a group/topic/partition func (cg *Consumergroup) Instance(id string) *ConsumergroupInstance Instance instantiates a new ConsumergroupInstance inside this consumer group, using an existing ID. func (cg *Consumergroup) Instances() (ConsumergroupInstanceList, error) Instances returns a map of all running instances inside this consumergroup. func (cg *Consumergroup) NewInstance() *ConsumergroupInstance NewInstance instantiates a new ConsumergroupInstance inside this consumer group, using a newly generated ID. func (cg *Consumergroup) PartitionOwner(topic string, partition int32) (*ConsumergroupInstance, error) PartitionOwner returns the ConsumergroupInstance that has claimed the given partition. This can be nil if nobody has claimed it yet. func (cg *Consumergroup) ResetOffsets() error func (cg *Consumergroup) Topics() (TopicList, error) Topics retrieves the list of topics the consumergroup has claimed ownership of at some point. func (cg *Consumergroup) WatchInstances() (ConsumergroupInstanceList, <-chan zk.Event, error) WatchInstances returns a ConsumergroupInstanceList, and a channel that will be closed as soon the instance list changes. func (cg *Consumergroup) WatchPartitionOwner(topic string, partition int32) (*ConsumergroupInstance, <-chan zk.Event, error) WatchPartitionOwner retrieves what instance is currently owning the partition, and sets a Zookeeper watch to be notified of changes. If the partition currently does not have an owner, the function returns nil for every return value. In this case is should be safe to claim the partition for an instance. ConsumergroupInstance represents an instance of a Consumergroup. func (cgi *ConsumergroupInstance) ClaimPartition(topic string, partition int32) error Claim claims a topic/partition ownership for a consumer ID within a group. If the partition is already claimed by another running instance, it will return ErrAlreadyClaimed. func (cgi *ConsumergroupInstance) Deregister() error Deregister removes the registration of the instance from zookeeper. func (cgi *ConsumergroupInstance) Register(topics []string) error Register registers the consumergroup instance in Zookeeper. func (cgi *ConsumergroupInstance) RegisterWithSubscription(subscriptionJSON []byte) error RegisterSubscription registers the consumer instance in Zookeeper, with its subscription. func (cgi *ConsumergroupInstance) Registered() (bool, error) Registered checks whether the consumergroup instance is registered in Zookeeper. func (cgi *ConsumergroupInstance) Registration() (*Registration, error) Registered returns current registration of the consumer group instance. func (cgi *ConsumergroupInstance) ReleasePartition(topic string, partition int32) error ReleasePartition releases a claim to a partition. func (cgi *ConsumergroupInstance) UpdateRegistration(topics []string) error UpdateRegistration updates a consumer group member registration. If the consumer group member has not been registered yet, then an error is returned. func (cgi *ConsumergroupInstance) WatchRegistration() (*Registration, <-chan zk.Event, error) WatchRegistered returns current registration of the consumer group instance, and a channel that will be closed as soon the registration changes. type ConsumergroupInstanceList []*ConsumergroupInstance ConsumergroupInstanceList implements the sortable interface on top of a consumer instance list func (cgil ConsumergroupInstanceList) Find(id string) *ConsumergroupInstance Find returns the consumergroup instance with the given ID if it exists in the list. Otherwise it will return `nil`. func (cgil ConsumergroupInstanceList) Len() int func (cgil ConsumergroupInstanceList) Less(i, j int) bool func (cgil ConsumergroupInstanceList) Swap(i, j int) type ConsumergroupList []*Consumergroup ConsumergroupList implements the sortable interface on top of a consumer group list func (cgl ConsumergroupList) Find(name string) *Consumergroup Find returns the consumergroup with the given name if it exists in the list. Otherwise it will return `nil`. func (cgl ConsumergroupList) Len() int func (cgl ConsumergroupList) Less(i, j int) bool func (cgl ConsumergroupList) Swap(i, j int) Kazoo interacts with the Kafka metadata in Zookeeper NewKazoo creates a new connection instance NewKazooFromConnectionString creates a new connection instance based on a zookeeer connection string that can include a chroot. BrokerList returns a slice of broker addresses that can be used to connect to the Kafka cluster, e.g. using `sarama.NewAsyncProducer()`. Brokers returns a map of all the brokers that make part of the Kafka cluster that is registered in Zookeeper. Close closes the connection with the Zookeeper cluster func (kz *Kazoo) Consumergroup(name string) *Consumergroup Consumergroup instantiates a new consumergroup. func (kz *Kazoo) Consumergroups() (ConsumergroupList, error) Consumergroups returns all the registered consumergroups Controller returns what broker is currently acting as controller of the Kafka cluster func (kz *Kazoo) CreateTopic(name string, partitionCount int, replicationFactor int, topicConfig map[string]string) error CreateTopic creates a new kafka topic with the specified parameters and properties DeleteTopic marks a kafka topic for deletion. Deleting a topic is asynchronous and DeleteTopic will return before Kafka actually does the deletion. DeleteTopicSync marks a kafka topic for deletion and waits until it is deleted before returning. Topic returns a Topic instance for a given topic name Topics returns a list of all registered Kafka topics. WatchTopics returns a list of all registered Kafka topics, and watches that list for changes. Partition interacts with Kafka's partition metadata in Zookeeper. ISR returns the broker IDs of the current in-sync replica set for the partition Key returns a unique identifier for the partition, using the form "topic/partition". Leader returns the broker ID of the broker that is currently the leader for the partition. PreferredReplica returns the preferred replica for this partition. Topic returns the Topic of this partition. PartitionList is a type that implements the sortable interface for a list of Partition instances func (pl PartitionList) Len() int func (pl PartitionList) Less(i, j int) bool func (pl PartitionList) Swap(i, j int) const ( RegPatternStatic RegPattern = "static" RegPatternWhiteList RegPattern = "white_list" RegPatternBlackList RegPattern = "black_list" ) const ( RegDefaultVersion RegVersion = 1 ) type Registration struct { Pattern RegPattern `json:"pattern"` Subscription map[string]int `json:"subscription"` Timestamp int64 `json:"timestamp"` Version RegVersion `json:"version"` } Topic interacts with Kafka's topic metadata in Zookeeper. Config returns topic-level configuration settings as a map. Exists returns true if the topic exists on the Kafka cluster. Partition returns a Partition instance for the topic. func (t *Topic) Partitions() (PartitionList, error) Partitions returns a list of all partitions for the topic. Watch watches the topic for changes. WatchPartitions returns a list of all partitions for the topic, and watches the topic for changes. TopicList is a type that implements the sortable interface for a list of Topic instances. Find returns the topic with the given name if it exists in the topic list, and will return `nil` otherwise. Package kazoo imports 13 packages (graph) and is imported by 59 packages. Updated 2018-02-02. Refresh now. Tools for package owners.
https://godoc.org/github.com/wvanbergen/kazoo-go
CC-MAIN-2019-51
en
refinedweb
- Type: Bug - Status: Closed (View Workflow) - Priority: Major - Resolution: Rejected - Affects Version/s: 11.0.0.CR1 - Fix Version/s: None - - Labels: - Environment: WFLY 11.0.0.CR1, JDK8 Wildlfy logs a warning "WFLYEJB0463: Invalid transaction attribute type REQUIRED on SFSB lifecycle method Method postConstruct() of class class com.test.Test, valid types are REQUIRES_NEW and NOT_SUPPORTED. Method will be treated as NOT_SUPPORTED." for @Stateful-components with CDI lifecycle annotations, even if the Class does not define any Transaction-Attributes: @Named @SessionScoped @Stateful public class Test implements Serializable { private static final long serialVersionUID = -2055975290009863989L; @PostConstruct private void postConstruct() {} } The warning is only created when the @PostConstruct-method is on the class itself and not inherited (which does not make any difference). If the class extends another class, which declares a @PostConstruct-method, the warning does not appear.
https://issues.redhat.com/browse/WFLY-9277
CC-MAIN-2019-51
en
refinedweb
Proposed exercise. Output Solution using System; using System.IO; class InvertText { static void Main() { Console.Write("Enter name file: "); string name = Console.ReadLine(); if (File.Exists(name)) { StreamReader file = File.OpenText(name); string line; int count=0; do { line = file.ReadLine(); if (line != null) count++; } while (line != null); file.Close(); string[] lines = new string[count]; count = 0; line = ""; file = File.OpenText(name); do { line = file.ReadLine(); if (line != null) { lines[count] = line; count++; } } while (line != null); file.Close(); StreamWriter myfileWr = File.CreateText(name + ".dat"); for (int i = lines.Length - 1; i > 0; i--) myfileWr.WriteLine( lines[i] ); myfileWr.Close(); } else Console.WriteLine("Error"); } }
https://www.exercisescsharp.com/2013/04/812-invert-text-file.html
CC-MAIN-2019-51
en
refinedweb
Introduction We will see here an example on simple log4j configuration in Java. The purpose of inserting log statements into the code is a low-tech method for debugging it. It may also be the only way because debuggers are not always available or applicable. This is often the case for distributed applications. Features of Log4j - We can enable logging at runtime without modifying the application binary. - We can control the behavior of logging by editing only the configuration file, no need to touch the application binary. - Developer are always clear with detailed context for application failures. - Log4j has one of the distinctive features – the notion of inheritance. Using this logger hierarchy feature we are able to control the log statements output at arbitrarily fine granularity but also at great ease. This helps to reduce the volume of logged output and the cost of logging. - We can set many targets for the log output such as a file, an OutputStream, a java.io.Writer, a remote log4j server, a remote Unix Syslog daemon, or many other output targets. Logging Levels FATAL: shows messages at a FATAL level only. ERROR: Shows messages classified as ERROR and FATAL levels. WARNING: Shows messages classified as WARNING, ERROR, and FATAL levels. INFO: Shows messages classified as INFO, WARNING, ERROR, and FATAL levels. DEBUG: Shows messages classified as DEBUG, INFO, WARNING, ERROR, and FATAL levels. ALL – The ALL Level has the lowest possible rank and is intended to turn on all logging. OFF – The OFF Level has the highest possible rank and is intended to turn off logging. For more information please go through Apache Log4j. Example Log4j Configuration I am giving here simple logging example using log4j.xml configuration. This will log the messages to the console as well as to file. I have used here RollingFileAppender so that we can have multiple backup files. You can set maximum number of log files using property MaxBackupIndex. You can specify maximum size of each file using MaxFileSize. I have used ConsoleAppender for showing output into the console. You can format log pattern output using PatternLayout class. You can specify root log level for all loggers or you can also specify individually. This log4j.xml file is put into the classpath i.e. src directory. <> <appender name="RFA" class="org.apache.log4j.RollingFileAppender"> <param name="File" value="sample.log" /> <param name="ImmediateFlush" value="true" /> <param name="MaxFileSize" value="5KB" /> <param name="MaxBackupIndex" value="5" /> <layout class="org.apache.log4j.PatternLayout"> <param name="ConversionPattern" value="%-4r [%t] %-5p %c %x - %m%n" /> </layout> </appender> <root> <level value="DEBUG" /> <appender-ref <appender-ref </root> </log4j:configuration> Writing Java Class Write a simple java class for testing it. We need log4j.jar file to be put into the classpath. package in.sblog.log4j.test; import org.apache.log4j.Logger; import org.apache.log4j.xml.DOMConfigurator; public class TestLog { private static Logger logger = Logger.getLogger(TestLog.class); public static void main(String[] args) { DOMConfigurator.configure("src/log4j.xml"); logger.debug("log4j configuration is successful."); } } Output Console 0 [main] DEBUG in.sblog.log4j.test.TestLog - log4j configuration is successful. File 0 [main] DEBUG in.sblog.log4j.test.TestLog - log4j configuration is successful. The log file is created under project root directory and the name of the log file is sample.log. That’s all. Thanks for reading.
https://www.roytuts.com/simple-log4j-configuration-in-java/
CC-MAIN-2019-51
en
refinedweb
Macro: Delete lines that contains text in clipboard Hi, I want to create a simple macro to delete the lines that contains the string that is stored in the clipboard (when is executed). The process that I follow is: - Start recording macro - Open the find window and paste wanted text XXX into the Mark section. Then mark the lines. - Execute the option “Delete marked lines”. The problem is when I change the content of the string to YYY (clipboard), the Macro keeps marking and deleting the string I’ve used to record the macro (XXX). How can I change the macro to use the content of the clipboard instead of the static string? Thank you The way it works is that only when you “run” a “find”-related function, that is when all of the settings (Find’s edit boxes, checkboxes, etc) get captured into the macro. Note that “run” here means “press a button to do a find operation”. So you can be macro recording and change all of the individual settings all you want but nothing is recorded until you actually press the (e.g.'s) Find Next button or the Find All button. So…in your case, when you paste from the clipboard into the Find what zone, that paste action isn’t recorded into the macro…but at some point farther on in your actions the text from the paste is. Thus the text becomes hardcoded as a constant into the macro. Thank you. So, is there a way to reload that string text each time? I’ve found the shortcuts.xml file with the macro code, but I don’t know which commands to use (if they exist) . is there a way to reload that string text each time AFAIK, no. shortcuts.xml gets read into memory when Notepad++ starts; no (reasonable!) way to “intercept” that data item to insert something else…which I think is what you are considering…?? @Scott-Sumner said: AFAIK, no. shortcuts.xml gets read into memory when Notepad++ starts; no (reasonable!) way to “intercept” that data item to insert something else…which I think is what you are considering…?? Exactly. That’s why I wanted to read the data from clipboard dynamically. Thank you anyway I’ve found a possible solution without using the macro feature of Notepad++ but it takes some effort. You need to install the NppExec Notepad++ plugin (available via PluginManager). You need a helper program - NirCmdC by Nir Sofer. It is available here, packaged with the GUI version (NirCmd) in a ZIP file (scroll down to the end of the site to find the download link). Unpack the ZIP file of NirCmdC to a folder of your choice. Go to Menu Plugins -> NppExec -> Execute...and paste the following code to the Command(s)area: set NirCmdPath=<Path-to-the-folder-of-step-3>\nircmdc.exe set SearchAndMarkDialogTitle=<Title-of-Npp’s-Mark-window-(depends on your locale, it’s “Mark” in english)> npp_sendmsg WM_COMMAND IDM_SEARCH_MARK “$(NirCmdPath)” sendkeypress ctrl+v “$(NirCmdPath)” sendkeypress enter “$(NirCmdPath)” win close title “$(SearchAndMarkDialogTitle)” npp_sendmsg WM_COMMAND IDM_SEARCH_DELETEMARKEDLINES unset SearchAndMarkDialogTitle unset NirCmdPath Perform the required changes in lines 1 and 2 and save the code e.g. as DelLinesContainingClipboardContent. Go to Menu Plugins -> NppExec -> Advanced options.... Here you can associate the script of step 5 to a new entry in the Macrosmenu. Go to Menu Settings -> Shortcut Mapper...and assign the menu entry to a keyboard shortcut of your choice. Go to Menu Search -> Mark...and check the options Bookmark lineand Purge for each search. Maybe it is possible to automate this using NirCmdCbut this is your “homework” ;-). Now you can select the word you are searching for, hit CTRL+C, hit the keyboard shortcut of step 7, and all the lines containing the selected word will be deleted. - Vitaliy Dovgan last edited by Wow, this is very nice! The script can be simplified a little: set local NirCmdPath = <Path-to-the-folder-of-step-3>\nircmdc.exe npp_sendmsg WM_COMMAND IDM_SEARCH_MARK // show the Mark dialog "$(NirCmdPath)" sendkeypress ctrl+v // paste from the clipboard "$(NirCmdPath)" sendkeypress enter // apply - i.e. mark all "$(NirCmdPath)" sendkeypress esc // close the Mark dialig npp_sendmsg WM_COMMAND IDM_SEARCH_DELETEMARKEDLINES // delete the lines // don't forget to check "Bookmark line" and "Purge for each search" for that! Nice optimization. For the script to work on ALL lines of course option Wrap aroundmust be checked too. Well if we are opening it up to scripts, then I’ll offer up the following Pythonscript; I call it DeleteLinesWithClipboardText.py: def DLWCT__main(): if not editor.getSelectionEmpty(): return # keep it simple; maybe prompt user to cancel selection and re-run... if not editor.canPaste(): return # keep it simple; maybe prompt user that no text is in clipboard... orig_document_len = editor.getTextLength() editor.beginUndoAction() editor.paste() pasted_text_len = editor.getTextLength() - orig_document_len curr_pos = editor.getCurrentPos() clipboard_text = editor.getTextRange(curr_pos - pasted_text_len, curr_pos) editor.deleteRange(curr_pos - pasted_text_len, pasted_text_len) if '\r' in clipboard_text or '\n' in clipboard_text: # keep it simple; don't allow multiline search text editor.endUndoAction() return del_line_list = [] find_start_pos = 0 while True: f = editor.findText(FINDOPTION.MATCHCASE, find_start_pos, editor.getTextLength(), clipboard_text) if f == None: break (found_start_pos, found_end_pos) = f line_nbr = editor.lineFromPosition(found_start_pos) if len(del_line_list) > 0: if del_line_list[0] != line_nbr: del_line_list.insert(0, line_nbr) else: del_line_list.append(line_nbr) find_start_pos = found_end_pos for line in del_line_list: editor.deleteLine(line) editor.endUndoAction() DLWCT__main() The only tricky part is in getting the clipboard data. I paste the clipboard contents into the document, read what was just pasted, and then delete those contents. After that it is a simple matter to locate the desired text and delete the lines it occurs on.
https://community.notepad-plus-plus.org/topic/15804/macro-delete-lines-that-contains-text-in-clipboard/8
CC-MAIN-2019-51
en
refinedweb
CMake: the Case when the Project's Quality is Unforgivable CMake is a cross-platform system for automating project builds. This system is much older than the PVS-Studio static code analyzer, but no one has tried to apply the analyzer on its code and review the errors. As it turned out, there are a lot of them. The CMake audience is huge. New projects start on it and old ones are ported. I shudder to think of how many developers could have had any given error. Introduction CMake is a cross-platform system for automating software building from source code. CMake isn't meant directly for building, it only generates files to control a build from CMakeLists.txt files. The first release of the program took place in 2000. For comparison, the PVS-Studio analyzer appeared only in 2008. At that time, it was aimed at searching for bugs resulted from porting 32-bit systems to 64-bit ones. In 2010, the first set of general purpose diagnostics appeared (V501-V545). By the way, the CMake code has a few warnings from this first set. Unforgivable Errors The V1040 diagnostic was implemented not so long ago. Most likely, at the time of posting the article, it won't be released yet, nevertheless, we already found a cool error with its help. There's a typo made in the name __MINGW32_. At the end, one underline character is missing. If you search the code with this name, you can see that the version with two underline characters on both sides is used in the project: V531 It is odd that a sizeof() operator is multiplied by sizeof(). cmGlobalVisualStudioGenerator.cxx 558 bool IsVisualStudioMacrosFileRegistered(const std::string& macrosFile, const std::string& regKeyBase, std::string& nextAvailableSubKeyName) { .... if (ERROR_SUCCESS == result) { wchar_t subkeyname[256]; // <= DWORD cch_subkeyname = sizeof(subkeyname) * sizeof(subkeyname[0]); // <= wchar_t keyclass[256]; DWORD cch_keyclass = sizeof(keyclass) * sizeof(keyclass[0]); FILETIME lastWriteTime; lastWriteTime.dwHighDateTime = 0; lastWriteTime.dwLowDateTime = 0; while (ERROR_SUCCESS == RegEnumKeyExW(hkey, index, subkeyname, &cch_subkeyname, 0, keyclass, &cch_keyclass, &lastWriteTime)) { .... } .... } For a statically declared array, the sizeof operator will calculate size in bytes, taking into account the number of elements and their size. When evaluating the value of the cch_subkeyname variable, a developer didn't take it into account and got a value 4 times greater than intended. Let's explain where «four times» come from. The array and its wrong size is passed to the function RegEnumKeyExW: LSTATUS RegEnumKeyExW( HKEY hKey, DWORD dwIndex, LPWSTR lpName, // <= subkeyname LPDWORD lpcchName, // <= cch_subkeyname LPDWORD lpReserved, LPWSTR lpClass, LPDWORD lpcchClass, PFILETIME lpftLastWriteTime ); The lpcchName pointer must point to the variable, containing the buffer size in characters: «A pointer to a variable that specifies the size of the buffer specified by the lpClass parameter, in characters». The subkeyname array size is 512 bytes and can store 256 characters of the wchar_t type (in Windows, wchar_t is 2 bytes). It is 256 that should be passed to the function. Instead, 512 is multiplied by 2 and we get 1024. I think, it's clear now how to correct this error. You need to use division instead of multiplication: DWORD cch_subkeyname = sizeof(subkeyname) / sizeof(subkeyname[0]); By the way, the same error occurs when evaluating the value of the cch_keyclass variable. The error described can potentially lead to buffer overflow. All such fragments definitely have to be corrected: - V531 It is odd that a sizeof() operator is multiplied by sizeof(). cmGlobalVisualStudioGenerator.cxx 556 - V531 It is odd that a sizeof() operator is multiplied by sizeof(). cmGlobalVisualStudioGenerator.cxx 572 - V531 It is odd that a sizeof() operator is multiplied by sizeof(). cmGlobalVisualStudioGenerator.cxx 621 - V531 It is odd that a sizeof() operator is multiplied by sizeof(). cmGlobalVisualStudioGenerator.cxx 622 - V531 It is odd that a sizeof() operator is multiplied by sizeof(). cmGlobalVisualStudioGenerator.cxx 649 V595 The 'this->BuildFileStream' pointer was utilized before it was verified against nullptr. Check lines: 133, 134. cmMakefileTargetGenerator.cxx 133 void cmMakefileTargetGenerator::CreateRuleFile() { .... this->BuildFileStream->SetCopyIfDifferent(true); if (!this->BuildFileStream) { return; } .... } The pointer this->BuildFileStream is dereferenced right before the check for its validity. Didn't that cause any problems for anyone? Below there is another example of such snippet. It's made just like a carbon copy. But in fact, there are a lot of V595 warnings and most of them are not so obvious. From my experience, I can say that correcting warnings of this diagnostic takes the longest time. - V595 The 'this->FlagFileStream' pointer was utilized before it was verified against nullptr. Check lines: 303, 304. cmMakefileTargetGenerator.cxx 303 V614 Uninitialized pointer 'str' used. cmVSSetupHelper.h 80 class SmartBSTR { public: SmartBSTR() { str = NULL; } SmartBSTR(const SmartBSTR& src) { if (src.str != NULL) { str = ::SysAllocStringByteLen((char*)str, ::SysStringByteLen(str)); } else { str = ::SysAllocStringByteLen(NULL, 0); } } .... private: BSTR str; }; The analyzer detected usage of the uninitialized str pointer. It appeared due to an ordinary typo. When calling the SysAllocStringByteLen function, one should have used the src.str pointer. V557 Array overrun is possible. The value of 'lensymbol' index could reach 28. archive_read_support_format_rar.c 2749 static int64_t expand(struct archive_read *a, int64_t end) { .... if ((lensymbol = read_next_symbol(a, &rar->lengthcode)) < 0) goto bad_data; if (lensymbol > (int)(sizeof(lengthbases)/sizeof(lengthbases[0]))) goto bad_data; if (lensymbol > (int)(sizeof(lengthbits)/sizeof(lengthbits[0]))) goto bad_data; len = lengthbases[lensymbol] + 2; if (lengthbits[lensymbol] > 0) { if (!rar_br_read_ahead(a, br, lengthbits[lensymbol])) goto truncated_data; len += rar_br_bits(br, lengthbits[lensymbol]); rar_br_consume(br, lengthbits[lensymbol]); } .... } This piece of code hides several problems at once. When accessing lengthbases and lengthbits arrays, an array index might go out of bounds, as developers wrote the '>' operator instead of '>=' above. This check began to miss one unacceptable value. Here we have nothing but a classic error pattern called Off-by-one Error. Here's the entire list of array access operations by a non-valid index: - V557 Array overrun is possible. The value of 'lensymbol' index could reach 28. archive_read_support_format_rar.c 2750 - V557 Array overrun is possible. The value of 'lensymbol' index could reach 28. archive_read_support_format_rar.c 2751 - V557 Array overrun is possible. The value of 'lensymbol' index could reach 28. archive_read_support_format_rar.c 2753 - V557 Array overrun is possible. The value of 'lensymbol' index could reach 28. archive_read_support_format_rar.c 2754 - V557 Array overrun is possible. The value of 'offssymbol' index could reach 60. archive_read_support_format_rar.c 2797 Memory Leak V773 The function was exited without releasing the 'testRun' pointer. A memory leak is possible. cmCTestMultiProcessHandler.cxx 193 void cmCTestMultiProcessHandler::FinishTestProcess(cmCTestRunTest* runner, bool started) { .... delete runner; if (started) { this->StartNextTests(); } } bool cmCTestMultiProcessHandler::StartTestProcess(int test) { .... cmCTestRunTest* testRun = new cmCTestRunTest(*this); // <= .... if (testRun->StartTest(this->Completed, this->Total)) { return true; // <= } } this->FinishTestProcess(testRun, false); // <= return false; } The analyzer detected a memory leak. The memory by the testRun pointer isn't released, if the function testRun->StartTest returns true. When executing another code branch, this memory gets released in the function this-> FinishTestProcess. Resource Leak V773 The function was exited without closing the file referenced by the 'fd' handle. A resource leak is possible. rhash.c 450 RHASH_API int rhash_file(....) { FILE* fd; rhash ctx; int res; hash_id &= RHASH_ALL_HASHES; if (hash_id == 0) { errno = EINVAL; return -1; } if ((fd = fopen(filepath, "rb")) == NULL) return -1; if ((ctx = rhash_init(hash_id)) == NULL) return -1; // <= fclose(fd); ??? res = rhash_file_update(ctx, fd); fclose(fd); rhash_final(ctx, result); rhash_free(ctx); return res; } Strange Logic in Conditions V590 Consider inspecting the '* s != '\0' && * s == ' '' expression. The expression is excessive or contains a misprint. archive_cmdline.c 76 static ssize_t get_argument(struct archive_string *as, const char *p) { const char *s = p; archive_string_empty(as); /* Skip beginning space characters. */ while (*s != '\0' && *s == ' ') s++; .... } *s character comparison with null is redundant. The condition of the while loop depends only on whether the character is equal to a space or not. This is not an error, but an unnecessary complication of the code. V592 The expression was enclosed by parentheses twice: ((expression)). One pair of parentheses is unnecessary or misprint is present. cmCTestTestHandler.cxx 899 void cmCTestTestHandler::ComputeTestListForRerunFailed() { this->ExpandTestsToRunInformationForRerunFailed(); ListOfTests finalList; int cnt = 0; for (cmCTestTestProperties& tp : this->TestList) { cnt++; // if this test is not in our list of tests to run, then skip it. if ((!this->TestsToRun.empty() && std::find(this->TestsToRun.begin(), this->TestsToRun.end(), cnt) == this->TestsToRun.end())) { continue; } tp.Index = cnt; finalList.push_back(tp); } .... } The analyzer warns that the negation operation probably should be taken out of brackets. It seems that there is no such a bug here — just unnecessary double brackets. But most likely, there is a logic error in the code. The continue operator is executed only in the case if the list of tests this->TestsToRun isn't empty and cnt is absent in it. It is reasonable to assume that if the tests list is empty, the same action needs to take place. Most probably, the condition should be as follows: if (this->TestsToRun.empty() || std::find(this->TestsToRun.begin(), this->TestsToRun.end(), cnt) == this->TestsToRun.end()) { continue; } V592 The expression was enclosed by parentheses twice: ((expression)). One pair of parentheses is unnecessary or misprint is present. cmMessageCommand.cxx 73 bool cmMessageCommand::InitialPass(std::vector<std::string> const& args, cmExecutionStatus&) { .... } else if (*i == "DEPRECATION") { if (this->Makefile->IsOn("CMAKE_ERROR_DEPRECATED")) { fatal = true; type = MessageType::DEPRECATION_ERROR; level = cmake::LogLevel::LOG_ERROR; } else if ((!this->Makefile->IsSet("CMAKE_WARN_DEPRECATED") || this->Makefile->IsOn("CMAKE_WARN_DEPRECATED"))) { type = MessageType::DEPRECATION_WARNING; level = cmake::LogLevel::LOG_WARNING; } else { return true; } ++i; } .... } It's a similar example, but this time I'm more confident that an error takes place. The function IsSet(«CMAKE_WARN_DEPRECATED») checks that the value CMAKE_WARN_DEPRECATED is set globally, and the function IsOn(«CMAKE_WARN_DEPRECATED») checks that the value is set in the project configuration. Most likely, the complementary operator is redundant, as in both cases, it's correct to set same values of type and level. V728 An excessive check can be simplified. The '(A && !B) || (!A && B)' expression is equivalent to the 'bool(A) != bool(B)' expression. cmCTestRunTest.cxx 151 bool cmCTestRunTest::EndTest(size_t completed, size_t total, bool started) { .... } else if ((success && !this->TestProperties->WillFail) || (!success && this->TestProperties->WillFail)) { this->TestResult.Status = cmCTestTestHandler::COMPLETED; outputStream << " Passed "; } .... } This code can be simpler. One can rewrite the conditional expression in the following way: } else if (success != this->TestProperties->WillFail) { this->TestResult.Status = cmCTestTestHandler::COMPLETED; outputStream << " Passed "; } A few more places to simplify: - V728 An excessive check can be simplified. The '(A && B) || (!A && !B)' expression is equivalent to the 'bool(A) == bool(B)' expression. cmCTestTestHandler.cxx 702 - V728 An excessive check can be simplified. The '(A && !B) || (!A && B)' expression is equivalent to the 'bool(A) != bool(B)' expression. digest_sspi.c 443 - V728 An excessive check can be simplified. The '(A && !B) || (!A && B)' expression is equivalent to the 'bool(A) != bool(B)' expression. tcp.c 1295 - V728 An excessive check can be simplified. The '(A && !B) || (!A && B)' expression is equivalent to the 'bool(A) != bool(B)' expression. testDynamicLoader.cxx 58 - V728 An excessive check can be simplified. The '(A && !B) || (!A && B)' expression is equivalent to the 'bool(A) != bool(B)' expression. testDynamicLoader.cxx 65 - V728 An excessive check can be simplified. The '(A && !B) || (!A && B)' expression is equivalent to the 'bool(A) != bool(B)' expression. testDynamicLoader.cxx 72 Various Warnings V523 The 'then' statement is equivalent to the subsequent code fragment. archive_read_support_format_ar.c 415 static int _ar_read_header(struct archive_read *a, struct archive_entry *entry, struct ar *ar, const char *h, size_t *unconsumed) { .... /* * "__.SYMDEF" is a BSD archive symbol table. */ if (strcmp(filename, "__.SYMDEF") == 0) { archive_entry_copy_pathname(entry, filename); /* Parse the time, owner, mode, size fields. */ return (ar_parse_common_header(ar, entry, h)); } /* * Otherwise, this is a standard entry. The filename * has already been trimmed as much as possible, based * on our current knowledge of the format. */ archive_entry_copy_pathname(entry, filename); return (ar_parse_common_header(ar, entry, h)); } The expression in the last condition is similar to the last two lines of the function. A developer can simplify this code by removing the condition, or there is an error in the code and it should be fixed. V535 The variable 'i' is being used for this loop and for the outer loop. Check lines: 2220, 2241. multi.c 2241 static CURLMcode singlesocket(struct Curl_multi *multi, struct Curl_easy *data) { .... for(i = 0; (i< MAX_SOCKSPEREASYHANDLE) && // <= (curraction & (GETSOCK_READSOCK(i) | GETSOCK_WRITESOCK(i))); i++) { unsigned int action = CURL_POLL_NONE; unsigned int prevaction = 0; unsigned int comboaction; bool sincebefore = FALSE; s = socks[i]; /* get it from the hash */ entry = sh_getentry(&multi->sockhash, s); if(curraction & GETSOCK_READSOCK(i)) action |= CURL_POLL_IN; if(curraction & GETSOCK_WRITESOCK(i)) action |= CURL_POLL_OUT; actions[i] = action; if(entry) { /* check if new for this transfer */ for(i = 0; i< data->numsocks; i++) { // <= if(s == data->sockets[i]) { prevaction = data->actions[i]; sincebefore = TRUE; break; } } } .... } The i variable is used as a loop counter in the outer and inner loops. At the same time, the value of the counter again begins from zero in the inner loop. It might not be a bug here, but the code is suspicious. V519 The 'tagString' variable is assigned values twice successively. Perhaps this is a mistake. Check lines: 84, 86. cmCPackLog.cxx 86 void cmCPackLog::Log(int tag, const char* file, int line, const char* msg, size_t length) { .... if (tag & LOG_OUTPUT) { output = true; display = true; if (needTagString) { if (!tagString.empty()) { tagString += ","; } tagString = "VERBOSE"; } } if (tag & LOG_WARNING) { warning = true; display = true; if (needTagString) { if (!tagString.empty()) { tagString += ","; } tagString = "WARNING"; } } .... } The tagString variable is overwritten with a new value in all places. It's hard to say what's the issue or why they did it. Perhaps, the '=' and '+=' operators were muddled. The entire list of such places: - V519 The 'tagString' variable is assigned values twice successively. Perhaps this is a mistake. Check lines: 94, 96. cmCPackLog.cxx 96 - V519 The 'tagString' variable is assigned values twice successively. Perhaps this is a mistake. Check lines: 104, 106. cmCPackLog.cxx 106 - V519 The 'tagString' variable is assigned values twice successively. Perhaps this is a mistake. Check lines: 114, 116. cmCPackLog.cxx 116 - V519 The 'tagString' variable is assigned values twice successively. Perhaps this is a mistake. Check lines: 125, 127. cmCPackLog.cxx 127 V519 The 'aes->aes_set' variable is assigned values twice successively. Perhaps this is a mistake. Check lines: 4052, 4054. archive_string.c 4054 int archive_mstring_copy_utf8(struct archive_mstring *aes, const char *utf8) { if (utf8 == NULL) { aes->aes_set = 0; // <= } aes->aes_set = AES_SET_UTF8; // <= .... return (int)strlen(utf8); } Forced setting of the AES_SET_UTF8 value looks suspicious. I think such code will confuse any developer, who comes to refining this fragment. This code was copied to another place: - V519 The 'aes->aes_set' variable is assigned values twice successively. Perhaps this is a mistake. Check lines: 4066, 4068. archive_string.c 4068 How to Find Bugs in a Project on CMake In this section, I'll briefly tell you how to check CMake projects with PVS-Studio as easy as one-two-three. Windows/Visual Studio For Visual Studio, you can generate a project file using CMake GUI or the following command: cmake -G "Visual Studio 15 2017 Win64" .. Next, you can open the .sln file and check the project using the plugin for Visual Studio. Linux/macOS The file compile_commands.json is used for checks on these systems. By the way, it can be generated in different build systems. This is how you do it in CMake: cmake -DCMAKE_EXPORT_COMPILE_COMMANDS=On .. The last thing to do is run the analyzer in the directory with the .json file: pvs-studio-analyzer analyze -l /path/to/PVS-Studio.lic -o /path/to/project.log -e /path/to/exclude-path -j<N> We have also developed a module for CMake projects. Some people like using it. CMake module and examples of its usage can be found in our repository on GitHub: pvs-studio-cmake-examples. Conclusion A huge audience of CMake users is great to test the project, but many issues could be prevented before the release by using static code analysis tools, such as PVS-Studio. If you liked the analyzer results, but your project isn't written in C and C++, I'd like to remind that the analyzer also supports analysis of projects in C# and Java. You can test the analyzer on your project by going to this page. Only users with full accounts can post comments. Log in, please.
https://habr.com/en/company/pvs-studio/blog/464145/
CC-MAIN-2019-51
en
refinedweb
Simple pupile function calculations Note: This is a remake of my original post on pupil functions. I decided to write this remake to fix what I perceived to be some errors in thinking and to better address the problem of units in some of the physical quantitities. For the original, click here. A pupil function is a theoretical tool for characterizing an imaging system. In simple terms, it is a mathematical model for any general arrangement of lenses and mirrors used to collect light from an object and form an image of that object in some other plane. A few of the reasons why pupil functions are useful include: - they reduce a complicated optical system--such as a microscope--to a relatively simple, two-dimensional, and complex-valued function; - they provide a convenient way to represent the aberrations present in the system; - they are easy to simulate on a computer using fast Fourier transforms. In this post I will show you how to write a simple program that computes the image of an isotropic point source of light from an optical system's pupil function. Theoretical background¶ In scalar diffraction theory, light is represented as a three-dimensional function known as the scalar field. At every point in space $ \mathbf{r} $, the scalar field $ u \left( \mathbf{r} \right) $ is a single, complex value that represents the electric field at that point. The physical laws of diffraction require that the scalar field be described by two numbers, an amplitude and a phase, that are derived from the field's real and the imaginary parts, $ \text{Re} \left[ u \left( \mathbf{r} \right) \right] $ and $ \text{Im} \left[ u \left( \mathbf{r} \right) \right] $:\begin{align*} A &= \sqrt{\text{Re} \left[ u \left( \mathbf{r} \right) \right]^2 + \text{Im} \left[ u \left( \mathbf{r} \right) \right]^2 } \\ \phi &= \arctan \left( \frac{\text{Im} \left[ u \left( \mathbf{r} \right) \right]}{\text{Re} \left[ u \left( \mathbf{r} \right) \right]} \right) \end{align*} If we know the amplitude and phase at a given point, then we know the scalar field at that point. Despite the fact that scalar diffraction theory ignores the polarization of light, it does wonderfully well at describing a large range of optical phenomena. For most problems in imaging, we don't really need to know the three-dimensional distribution of the field in all of space. Instead, we simplify the problem by asking how an optical system transforms the field in some two-dimensional object plane into a new field distribution in the image plane (see the figure below). Any changes in scale between these two planes are caused by the system's magnification; any blurring or distortion is caused by diffraction and possibly aberrations. The pupil function is the two-dimensional Fourier transform of the scalar field in the image plane when the object is a point source emitting light equally in all directions, i.e. an isotropic point source. Mathematically, the pupil function is written as\begin{equation*} P \left(f_x, f_y \right) = \frac{1}{A}\iint_{-\infty}^{\infty} \text{PSF}_A \left( x, y \right) \exp \left[ -j 2 \pi \left( f_x x + f_y y\right) \right] \, dx \, dy \end{equation*} where $ A $ is a normalizing constant, $ f_x $ and $ f_y $ represent spatial frequencies in the x- and y-directions, and $ j $ is the imaginary number. $ \text{PSF}_A \left( x, y \right) $ is known as the amplitude point spread function. Despite the intimidating name, $ \text{PSF}_A \left( x, y \right) $ is just the scalar field from the isotropic point source in the image plane. The pupil function and the amplitude point spread function form a Fourier transform pair, so we can also write\begin{equation*} \text{PSF}_A \left(x, y \right) = A \iint_{-\infty}^{\infty} P \left( f_x, f_y \right) \exp \left[ j 2 \pi \left( f_x x + f_y y\right) \right] \, df_x \, df_y \end{equation*} What all of this means is that we can compute the image of an on-axis, isotropic point source if we know the pupil function that describes the system: compute the two-dimensional Fourier transform of the pupil function and voilà, you have the image (or at least the field that will form the image). The pupil punction is dimensionless; $ \text{PSF}_A $ is a field¶ By convention the pupil function is a dimensionless complex number and has a magnitude between 0 and 1. The amplitude PSF, however, has units of electric field, or Volts per distance, $ V / m $. If you perform a dimensional analysis on the Fourier transform expressions above, you can see that the normalizing constant $ A $ has to have units of $ V \times m $. For example, if $ \text{PSF}_A $ has units of $ V / m $ and $ dx $ and $ dy $ both have units of $ m $, then $ A $ has units of $ V \times m $ and the pupil function is therefore dimensionless. Sometimes in the literature you will find that units or a normalizing constant are ignored or, worse, that the reader can sort of "fill them in" later. I prefer to be explicit and to define the $ \text{PSF}_A $ to be a field. Pupil functions are not entrance or exit pupils¶ The pupil function and the pupil planes of a system are not the same thing. The entrance and exit pupils--which together are known as the pupil planes--are the planes in which the images of the system's aperture stop are located. The pupil function, however, is not an image of the aperture stop of the system; it's a 2D Fourier transform of a field. There is never-the-less a relationship between the pupil function and the plane of the exit pupil. The pupil function represents the relative amplitude and phase of the field on the surface of a so-called reference sphere that intersects the optics axis in the plane of the system's exit pupil. Pupil function simulations in python¶ The goal of this simulation will be simple: given a pupil function, a single wavelength, an optical system with a numerical aperture NA, and an amount of power that passes through the image plane, compute the image of an on-axis isotropic point source. There are only a few steps needed to achieve our goal: - define the simulation's input parameters; - setup the image plane and pupil plane coordinate system; - create the pupil plane and normalize it so that the field carries the desired amount of power; - and compute the field in the image plane. Before we go further, it's worth pointing out that the pupil function and $ \text{PSF}_A $ are obtained by calculating the continuous Fourier transform of one another. On a computer, however, it's often easiest to compute what's called a discrete Fourier tranform via the fast Fourier transform (FFT) algorithm. The continuous Fourier transform and the FFT are not, strictly speaking, the same thing. Therefore, we should expect from the start that there may be small differences between the computed $ \text{PSF}_A $ and the analytical calculation. With this in mind, we'll start by importing a few scientific libraries like Numpy and Scipy. %pylab inline import sys from numpy.fft import fft2, fftshift import scipy from scipy.integrate import simps import seaborn as sns # Used only to set up plots sns.set_context(context = 'talk') plt.style.use('dark_background') plt.rcParams['figure.facecolor'] = '#272b30' plt.rcParams['image.cmap'] = 'viridis' print('Python version:\n{}\n'.format(sys.version)) print('Numpy version:\t\t{}'.format(np.__version__)) print('matplotlib version:\t{}'.format(matplotlib.__version__)) print('Scipy version:\t\t{}'.format(scipy.__version__)) print('Seaborn version:\t{}'.format(sns.__version__)) Populating the interactive namespace from numpy and matplotlib Python version: 3.5.2 |Continuum Analytics, Inc.| (default, Jul 2 2016, 17:53:06) [GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] Numpy version: 1.11.2 matplotlib version: 1.5.3 Scipy version: 0.18.1 Seaborn version: 0.7.1 Step 1: Define the input parameters¶ Next, we need to define a few parameters that will determine the output of the simulations. These parameters are: - wavelength Units are $ \mu m $. - NA Numerical aperture of the system. No units. - pixelSize The length of a square pixel in the object space. Units are $ \mu m $. - numPixels The number of pixels in your camera. This will be assumed to be even. - power The total power carried by the field in Watts, $ W $. Note that pixel size is defined as the size of a pixel in the object space. Defining it like this is often more intuitive than defining the pixel size in the image space. For example, when you see a scale bar on a microscopy image, the distances correspond to object space distances, not the actual distance that the image spans on the piece of paper or computer screen. Furthermore, there is no problem with working with an object space pixel size in the image plane since the coordinate systems in the object and image planes of our perfect optical system can be easily mapped onto one another by a linear scaling by the system's magnification. We don't neccessarily need to use a camera as the detector. Since we are limited to working with discrete arrays in the computer, though, it's convenient to say that we have a camera as a detector since each pixel is a discrete sample of the field. In addition to the above parameters, we'll assume that the object, the imaging system, and the image plane are all in air. We'll define a constant $ Z_0 = 376.73 \, \Omega $ which is known as the impedance of free space or the vacuum impedance. This is the constant of proportionality between the power carried by the scalar field and the integral of its absolute square in the image plane:\begin{equation*} P_0 = \frac{1}{Z_0} \iint_{-\infty}^{\infty} \left| \text{PSF}_A \left( x, y \right) \right|^2 \, dx \, dy \end{equation*} Of course, air does not really have the same impedance as vacuum, but the two values are close enough. I have also used $ P_0 $ to denote the power because I already used $ P \left( f_x, f_y \right) $ to denote the pupil function. # Setup the simulation parameters wavelength = 0.68 # microns NA = 0.7 # Numerical aperture of the objective pixelSize = 0.1 # microns numPixels = 2048 # Number of pixels in the camera; keep this even power = 0.1 # Watts Z0 = 376.73 # Ohms; impedance of free space Step 2: Create the image and pupil plane coordinate systems¶ Our simulation will transform the values in a two-dimensional square array of complex numbers (the pupil function) into a new two-dimensional array of complex numbers of the same size. Before we do this, however, let's first determine the coordinates of each pixel. Since we specified the pixel size and the number of pixels, it's easiest to start with the image plane coordinates. We will define the origin of our coordinate system to lie at the center of the array, which, for an even number of pixels and array indexes that start at zero, lies halfway between the pixels $ \left( \frac{\text{numPixels}}{2} \right) - 1 $ and $ \left( \frac{\text{numPixels}}{2} \right) $ in both the horizontal and vertical directions. # Create the image plane coordinates x = np.linspace(-pixelSize * numPixels / 2, pixelSize * numPixels / 2, num = numPixels, endpoint = True) We only need to create a single, one-dimensional array to represent the coordinates because our image and pupil plane arrays will be square; we can use the same array to represent the coordinates in both the horizontal and vertical directions. With the image plane coordinates taken care of, the next question is: what are the pupil plane coordinates? This question is a frequent source of frustration for students (and full-time scientists). I won't go into the details in this post, but instead will just tell you the two rules you need to remember for Fourier optics simulations - The number of elements in the pupil function array is the same as the number of elements in the image plane array. - For an even number of array elements, the frequency values along either principle direction in the pupil function run from $ -\frac{f_S}{2} $ to $ f_S \left( \frac{1}{2} - \frac{1}{\text{numPixels}} \right) $ with the spacing between discrete coordinate values equal to $ \frac{f_S}{\text{numPixels}} $. $ f_S $ is called the sampling frequency and is equal to one divided by the spacing between image space coordinates. We can go ahead now and compute the frequency-space coordinate values. # Create the Fourier plane dx = x[1] - x[0] # Sampling period, microns fS = 1 / dx # Spatial sampling frequency, inverse microns df = fS / numPixels # Spacing between discrete frequency coordinates, inverse microns fx = np.arange(-fS / 2, fS / 2, step = df) # Spatial frequency, inverse microns Step 3: Create the pupil function¶ In nearly all imaging systems, the pupil is circular because its optical elements are circular. The radius of the pupil function is the ratio of the system's numerical aperture to the wavelength of the light, $ \frac{\text{NA}}{\lambda} $ (Hanser, 2004). Perfect systems like the one we are modeling here have a pupil with a constant value everywhere inside this circle and zero outside of it. We can simulate such a pupil by making a circular mask with a radius of $ \frac{\text{NA}}{\lambda} $. The mask is one inside the circle and zero outside of it. # Create the pupil, which is defined by the numerical aperture fNA = NA / wavelength # radius of the pupil, inverse microns pupilRadius = fNA / df # pixels pupilCenter = numPixels / 2 # assumes numPixels is even W, H = np.meshgrid(np.arange(0, numPixels), np.arange(0, numPixels)) # coordinates of the array indexes pupilMask = np.sqrt((W - pupilCenter)**2 + (H - pupilCenter)**2) <= pupilRadius Define the power carried by the scalar field¶ I mentioned in the theoretical background above that the total optical power carried by the field is the two dimensional integral of the absolute square of the field divided by the impedance. If we want to set the power as an input of the simulation, we need to first normalize our pupil values by this integral. Parseval's theorem tells us that we can integrate over $ \text{PSF}_A \left( x, y \right) $ and $ A \times P \left( f_x, f_y \right) $, i.e. the image plane field or the normalized pupil, respectively, and get the same number:\begin{equation*} \iint_{-\infty}^{\infty} \left| \text{PSF}_A \left( x, y \right) \right|^2 \, dx \, dy = A^2 \iint_{-\infty}^{\infty} \left| P \left( f_x, f_y \right) \right|^2 \, df_x \, df_y \end{equation*} Now that we have the pupil, we can perform a numerical integration over it using Simpson's rule to find the normalizing constant. We then multiply the pupil by the square root of this constant times our desired value for the power to set the total power carried by the field. # Compute normalizing constant norm_factor = simps(simps(np.abs(pupilMask)**2, dx = df), dx = df) / Z0 # A^2 print('Normalization constant:\t\t{:.4f} W'.format(np.sqrt(norm_factor))) # Renormalize the pupil values pupil = pupilMask * (1 + 0j) normedPupil = pupil * np.sqrt(power / norm_factor) new_power = simps(simps(np.abs(normedPupil)**2, dx = df), dx = df) / Z0 print('User-defined power:\t\t{:.4f} W'.format(power)) print('Power now carried by field:\t{:.4f} W'.format(new_power)) Normalization constant: 0.0940 W User-defined power: 0.1000 W Power now carried by field: 0.1000 W # Show the pupil ax = plt.imshow(np.abs(pupil), extent = [fx[0], fx[-1], fx[0], fx[-1]]) plt.grid(False) cb = plt.colorbar(ax) cb.set_label('$P \, ( f_x, f_y ) $') plt.xlabel('$f_x$, $\mu m^{-1}$') plt.ylabel('$f_y$, $\mu m^{-1}$') plt.show() # Compute the power power_pupil = simps(simps(np.abs(normedPupil)**2, dx = df), dx = df) / Z0 print('Power in pupil plane: {:.4f} W'.format(power_pupil)) Power in pupil plane: 0.1000 W Step 4: Compute the image of the point source¶ With the pupil and coordinate systems established, we are now ready to compute the image of the isotropic point source that this system produces. To do this, we need to perform a few easy but important steps. In the first step, we will shift the origin of the pupil from the center of the array to the array indexes $ \left( 0, 0 \right) $ using the ifftshift() function. The reason we do this is that fft2() expects that the zero frequency value lies at the origin of the array. The two-dimensional FFT of the shifted pupil is then computed, producing a new array with the zero frequency at array indexes $ \left( 0, 0 \right) $ and the Nyquist frequency $ f_S / 2 $ in the middle of the array's axes (numpy.fft.fft2 - NumPy v1.11 Manual, accessed on 2016-11-15). We finish by shifting the origin back to the center of the array using fftshift() so that it makes sense when we visualize the results. The final step is to multiply the result by the square of the spacing between frequency coordinates. This step ensures that the power is preserved during the FFT operation (Schmidt, 2010, pp. 15-18). Chaining the functions together, these steps look like: psf_a = fftshift(fft2(ifftshift(normedPupil))) * df**2 The image is the irradiance in the image plane, which is the absolute square of the field, divided by the vacuum impedance. Its units are power per area, or in this case $ W / \mu m^2 $. In optics, the irradiance is what is measured by a camera, not the field. image = np.abs(psf_a)**2 / Z0 psf_a = fftshift(fft2(ifftshift(normedPupil))) * df**2 image = np.abs(psf_a)**2 / Z0 # Show the image plane img = plt.imshow(image, interpolation='nearest', extent = [x[0], x[-1], x[0], x[-1]]) cb = plt.colorbar(img) plt.gca().set_xlim((-2, 2)) plt.gca().set_ylim((-2, 2)) plt.xlabel('x, $\mu m$') plt.ylabel('y, $\mu m$') cb.set_label('Irradiance, $W / \mu m^2$') plt.grid(False) plt.show() Above you can see the image of an isotropic point source. The image is not a point but rather a blurred spot in the center of the image plane due to diffraction at the pupil. power_image = simps(simps(image, dx = dx), dx = dx) print('Power in pupil plane: {:.4f} W'.format(power_pupil)) print('Power in object plane: {:.4f} W'.format(power_image)) Power in pupil plane: 0.1000 W Power in object plane: 0.1000 W So far so good. The next thing that we can do to verify these results is to calculate the sampled values of the analytical solution to this problem. Scalar diffraction theory predicts that the solution for the irradiance of the field diffracted by a circular aperture is an Airy disk:\begin{equation*} I \left( r \right) = I_0 \left[ \frac{2 J_1 \left( X \right)}{X} \right]^2 \end{equation*} where $X = \frac{2 \pi r \, \text{NA}}{\lambda} $, $ r = \sqrt{x^2 + y^2} $ is the radial coordinate, and $ J_1 \left( r \right) $ is called the first-order Bessel function of the first kind (Weisstein, Mathworld, accessed on 2016-11-16). This function does not exist inside Python's scientific libraries, so we will need to create it. from scipy.special import j1 as bessel1 def airyDisk(x,y, NA = 0.5, wavelength = 0.532): """Computes a 2D airy disk pattern. Parameters ---------- x, y : array of int, array of int Coordinates where the function will be evaluated. NA : float The system's numerical aperture. wavelength: float The wavelength of the light; same units as x and y. Returns ------- result : array of float """ r = np.sqrt(x**2 + y**2) X = 2 * np.pi * r * NA / wavelength result = (2 * bessel1(X) / X)**2 try: # Replace value where divide-by-zero occurred with 1 result[np.logical_or(np.isinf(result), np.isnan(result))] = 1 except TypeError: # TypeError is thrown when single integers--not arrays--are passed into the function result = np.array([result]) result[np.logical_or(np.isinf(result), np.isnan(result))] = 1 return result Now we can go ahead and visually compare our image plane calculations with the airy disk. If we subtract one from the other, we should get all zeros. from mpl_toolkits.axes_grid1 import make_axes_locatable # Subtraction by dx/2 places the origin at the edge of a pixel, not a center X, Y = np.meshgrid(x - dx/2, x - dx / 2, indexing = 'xy') fig, ax = plt.subplots(nrows = 1, ncols = 2, figsize = (12, 8)) img = ax[0].imshow(image, interpolation='nearest', extent = [x[0], x[-1], x[0], x[-1]]) divider = make_axes_locatable(ax[0]) cax = divider.append_axes("right", size="5%", pad=0.05) cb0 = plt.colorbar(img, cax = cax) ax[0].grid(False) ax[0].set_xlim((-2, 2)) ax[0].set_ylim((-2, 2)) ax[0].set_xlabel('x, $\mu m$') ax[0].set_ylabel('y, $\mu m$') ax[0].set_title('Simulation') plt.grid(False) I0 = np.max(image) img = ax[1].imshow(I0 * airyDisk(X,Y, NA = NA, wavelength = wavelength), interpolation = 'nearest', extent = [x[0], x[-1], x[0], x[-1]]) divider = make_axes_locatable(ax[1]) cax = divider.append_axes("right", size="5%", pad=0.05) cb1 = plt.colorbar(img, cax = cax) ax[1].grid(False) ax[1].set_xlim((-2, 2)) ax[1].set_ylim((-2, 2)) ax[1].set_xlabel('x, $\mu m$') ax[1].set_title('Theory') cb1.set_label('Irradiance, $W / \mu m^2$') plt.tight_layout() plt.show() /home/kmdouglass/anaconda3/envs/nikola/lib/python3.5/site-packages/ipykernel/__main__.py:21: RuntimeWarning: invalid value encountered in true_divide # Find and plot the difference between the simulated image and the theoretical one I0 = np.max(image) diffImg = image - I0 * airyDisk(X,Y, NA = NA, wavelength = wavelength) plt.imshow(diffImg, interpolation = 'nearest', extent = [x[0], x[-1], x[0], x[-1]]) cb = plt.colorbar() cb.set_label('Irradiance, $W / \mu m^2$') plt.grid(False) plt.xlabel('x $\mu m$') plt.xlim((-2, 2)) plt.ylabel('y $\mu m$') plt.ylim((-2, 2)) plt.title('Difference between simulation and theory') plt.show() /home/kmdouglass/anaconda3/envs/nikola/lib/python3.5/site-packages/ipykernel/__main__.py:21: RuntimeWarning: invalid value encountered in true_divide From the plots above you can see that the simulations perform pretty well at finding the image of the point source. In fact, the differences in the final irradiance values differ by at most a small fraction of a percent. However, the pattern in the difference image is not random, so these differences are probably not round-off errors. In fact, they come from the minor but important detail that we are calculating a discrete Fourier transform with the FFT command, whereas the theory predicts that the point spread function is a continuous Fourier transform of the pupil. Final remarks¶ The calculations I discussed here rely on quite a few simplifying assumptions. The first is that they are based on scalar diffraction theory. The real electromagnetic field is polarized and is described not by a scalar but by a three dimensional, complex-valued vector. The theory of vector diffraction describes how such a field propagates through an optical system; naturally, it is more complicated than scalar diffraction theory. Isotropic point sources are also an idealization of reality. Single fluorescent molecules, for instance, do not emit light equally in all directions but rather in a pattern known as dipole radiation. Finally, the optical system as modeled here was a linear, shift-invariant system. This means that the same pupil function can actually be used to compute the image of any off-axis point source as well; we would only need to apply a linear phase ramp to the pupil to account for this. Real optical systems are only approximatley linear and shift-invariant, which means that a more complete model would assign a different pupil function to each object point. This then allows for aberrations that vary as a function of object position, but requires more information to fully describe the system.
https://kmdouglass.github.io/posts/simple-pupil-function-calculations/
CC-MAIN-2019-51
en
refinedweb
Server that executes RPC commands through HTTP. Dependencies: EthernetInterface mbed-rpc mbed-rtos mbed Import programHTTP-Server Server that executes RPC commands through HTTP. Overview This program is a small HTTP Server that can execute RPC commands sent through HTTP and sends back a reply. This reply can be either a line of text or an HTML page. HTTP Request The server will only read the first line of the HTTP header. It does not check that the header is correct and it does not attempt to read the body. Hence, the server is only interested in the type of the request and the path. Instead of requesting a file, the path contains the RPC command. RPC command encoding Information The RPC command must be encoded in a special way because no spaces are allowed in the path. Thus, the RPC command is encoded in this way : /<obj name or type>/<func name>?arg1_name=arg1_val&arg2_name=arg2_val or, if the RPC command does not have any arguments : /<obj name or type>/<func name> So, a complete URL might be : The name of the arguments does not appear in the final RPC command. So these 3 urls will do exactly the same thing : Also, the order of the arguments are preserved. Request handlers To process requests, the server relies on RequestHandler. Each RequestHandler is assigned to a request type. Each type of request is assigned to a certain role : - PUT requests to create new objects - DELETE requests to delete objects - GET requests to call a function of an object However, there is a RequestHandler that accepts only GET requests but it can create/delete/call a function. This was necessary to provide an interactive web page that allows creation and deletion of objects. Reply The reply depends on the formatter. Currently, three formatters are available : - The most basic one does not modify the RPC reply. Hence, if you consider sending request from python scripts, this formatter is the most appropriate one. - A simple HTML formatter will allow the user to view the RPC reply and a list of RPC objects currently alive from a browser. - Finally, a more complex HTML formatter creates an entire web page where the user can create and delete an object as well as calling functions on these objects. Configure the server The configuration of the server consists on choosing the formatter and adding one or more request handlers to the server. The main function initializes the server in order to produce HTML code and to receive data only using GET requests. If you want to use a simpler and different version of the server you can change the content of the main function (located in main.cpp) by this code : main RPCType::instance().register_types(); EthernetInterface eth; eth.init(); eth.connect(); printf("IP Address is %s\n", eth.getIPAddress()); HTTPServer srv = create_simple_server(); if(!srv.init(SERVER_PORT)) { printf("Error while initializing the server\n"); eth.disconnect(); return -1; } srv.run(); return 0; However, this configuration will not work with the following examples. Examples I assume that the server is using the InteractiveHTMLFormatter (which should be the case if you did not make any changes). Using a browser Here is a quick guide how to run this program : - Compiles this program and copies it to the mbed - Open TeraTerm (install it if you don't have it), select serial and choose the port named "mbed Serial Port" - Reset your mbed - The IP address should appear in teraterm. In this example, I will use 10.2.200.116. - Open your browser and go to. - If everything is ok, you should see a webpage. Now, let's switch on a led. First, we need to create an object to control a led : Then, let's write an RPC command to switch on led : Using python This program creates and switches on led2. Sending RPC commands over HTTP with Python import httplib SERVER_ADDRESS = '10.2.200.38' h = httplib.HTTPConnection(SERVER_ADDRESS) h.request("GET", "/DigitalOut/new?arg=LED2&name=led2") r = h.getresponse() print r.read() h.request("GET", "/led2/write?arg=1") r = h.getresponse() print r.read() h.close() Of course, you might have to change the server address in order to make it work. HTTPServer.cpp@10:8b4c3d605bf0, 2013-07-18 (annotated) - Committer: - feb11 - Date: - Thu Jul 18 10:10:14 2013 +0000 - Revision: - 10:8b4c3d605bf0 - Parent: - 9:a9bf63017854 Improved javascript code
https://os.mbed.com/users/feb11/code/HTTP-Server/annotate/8b4c3d605bf0/HTTPServer.cpp/
CC-MAIN-2019-51
en
refinedweb
#include <ss_ss2pi_base.g.hh> Inheritance diagram for lestes::lang::cplus::sem::ss_expr2pi: Generated constructor for class ss_expr2pi. Generated constructor for class ss_expr2pi. The method caller_get returns the value of the field ss_expr2pi::caller. The method caller_set sets the field ss_expr2pi::caller to the given value. The method result_get returns the value of the field ss_expr2pi::result. The method result_set sets the field ss_expr2pi::result to the given value. The method psp_get returns the value of the field ss_expr2pi::psp. The method psp_set sets the field ss_expr2pi::psp to the given value. The method temporaries_get returns the value of the field ss_expr2pi::temporaries. The method temporaries_set sets the field ss_expr2pi::temporaries to the given value. The method expr2operand_map_get returns the value of the field ss_expr2pi::expr2operand_map. The method expr2operand_map_set sets the field ss_expr2pi::expr2operand_map to the given value. The method returned_pointer_get returns the value of the field ss_expr2pi::returned_pointer. The method returned_pointer_set sets the field ss_expr2pi::returned_pointer to the given value. Check whether given expression was yet transformed. Check whether given expression was already transformed. In such a case result is set to according result of evaluated subexpression and returned TRUE. Returning value from transforming expression visitor. Returning value from transforming expression visitor. Returning value from transforming expression visitor. Transform given expression and check whether pointer for dereference is returned. Used by (vol)get and assign for identification of dereferenced pointer in operand,which cause load/store via returned pointer. This can happen via dereferience of ss_pointer or via var_ref of variable with ss_reference type. Assignment transformation. It distinguishes between store and store via pointer dependently on returned result from subexpression. pointer on destination type Implements lestes::lang::cplus::sem::ss_expression_visitor. Conversion between types. Note, that frontend products conversions, where source type is equal destination type. These cases must be properly treated. Implements lestes::lang::cplus::sem::ss_expression_visitor. Up to now, there is nothing to be done for array->pointer conversion. Implements lestes::lang::cplus::sem::ss_expression_visitor. Bind reference. It can occurs in initialization, function parameter passing and return value store. Its input value shall be lvalue - in case of dereference on pointer, only pointer shall be returned, in case of variable reference, its address shall be loaded. Implements lestes::lang::cplus::sem::ss_expression_visitor. lvalue->rvalue conversion or indirect load via pointer pointer on source type Implements lestes::lang::cplus::sem::ss_expression_visitor. lvalue->rvalue conversion or indirect load via pointer on volatile variable pointer on source type Implements lestes::lang::cplus::sem::ss_expression_visitor. Load address of a given object (&x). Only lvalue can be the argument of this operator. Implements lestes::lang::cplus::sem::ss_expression_visitor. This time, templates should be instantiated. Implements lestes::lang::cplus::sem::ss_expression_visitor. Load proper pi_mem_factory for a given declaration. In case of & references, which are treated as pointers, pointer is read and flag for dereference is set. For referenced variables origin has to be computed, to determine correct data flow (e.g. in a=1+a, where a is used as both a load's source and a store's target). Implements lestes::lang::cplus::sem::ss_expression_visitor. First generated factory method for class ss_expr2pi. This factory method for class ss_expr2pi takes values of all fields as arguments. Second generated factory method for class ss_expr2pi. This factory method for class ss_expr2pi uses initializers. "visit-return" method Marking routine for class ss_expr2pi. Marking routine is used for garbage collection. Reimplemented from lestes::lang::cplus::sem::ss_expression_visitor. Nearest statement visitor caller(needed for variable access). Result of evaluating current expression. Usually pseudoregister, where the result is stored. For lvalues appropriate memory placeholder(pi_mem) is stored. For void funcalls NULL is stored. Previous full-expression sequence point. It points to the psp of the fullexpression (or the whole expression of the initializer). Used for stop mark of pi_mem origin computation - only side effects inside the whole expression are significant. Destructor table for temporary variables. Temporaries which rise, when evaluating exression. They will be destructed, after evaluating the whole expression(i.e. the whole statement) Map for result of subexpressions inside fullexpression. It prevents of reevaluation of already evaluated subexpressions. Used also for determination of memory destination of side effects. Temporary variable used by is_returned_pointer method. Reimplemented from lestes::lang::cplus::sem::ss_expression_visitor.
http://lestes.jikos.cz/uml/classlestes_1_1lang_1_1cplus_1_1sem_1_1ss__expr2pi.html
CC-MAIN-2019-51
en
refinedweb
Introduction:I am very happy to write an article about FTP server access from windows phone programming using c#.net, there are no direct APIs available for leveraging FTP services in a Windows phone app. From an enterprise perspective, this makes it rather difficult for employees to access files over their phones. Fortunately we have socket support for windows phone to communicate with a multitude of different services, making users feel more connected and available than ever. This article can explained about below topics: 1. What is FTP and Why? 2. How to Setup Ftp Server in Windows 8.0/8.1/10 OS system? 3. How to Connect FTP Server from WindowsPhone? 4. Upload files to Ft: 1. What is FTP and Why? -. This section is not applicable for windows app developer, but it is better to learn understand the FTP communication by creating server in your machine. 2.1. So open Programs and Features in Control panel. Click on ‘Turn Windows features on or off’ as shown below. If IIS was not installed earlier on particular Windows 8 or 8.1 computer, you need to install other features of IIS too ( as shown by arrow marks). See the below screenshot for the actual requirements to run FTP server on Windows 8/8.1 (All features which are ticked need to be installed). Note: You must be restart your machine for above step to make changes in your . 2.2. After installation is completed, search and open ‘Internet Information Services (IIS) Manager’ in Administrator mode from start menu. 2.3. Give a name for FTP site and browse the local folder which you need to give access to others through server.(I already created a folder called 'FTPShare' on C drive before reaching this step.) 2.4. In next screen you need to select the local computer’s IP address from drop down box. I hope you have already set up static IP for the computer. Under SSL option, select No SSL to make the connection without SSL certificate. In production environment for professional FTP server setup, you may need to enable SSL which requires a certificate. Now its time to Set Up FTP Access Permission on Windows 8 or 8.1 or Windows 10 2.5. In next screen you can set the permission for users to access the FTP site. Here you need to decide how others will be accessing the FTP share and who will be having Read-only or Read & Write access. Let’s assume this scenario, you want specific users to have read and write access, so obviously they must type an user name and password for it. Other users can access the FTP site without any username or password to view the content only, it’s called anonymous users access. You will find several ways to do this, but here is the simple way which worked for me. Firstly open start menu and type 'lusrmgr.msc', and create a user on Windows 8 or Windows 10 local computer (if you are not using active directory environment) . and add the users who will be having read and write access on FTP site. In this example I have created ‘ftpusers’ group and added required users inside it. 2.6. Now, let’s continue the FTP site settings. In next screen to give the permission, select ‘Basic’ which will be prompting for user name and password, select Specified roles or groups option from drop down and type the correct group name. Set Read and Write permission for the group. Press Finish to complete the setup. Check the Firewall ! It should allow FTP traffic. We are almost done and completed the all necessary steps. Now, you need to either disable the firewall or allow FTP inbound and outbound traffic on Windows 8 or 8.1 computer. I hope you can do it easily. 3. How to Connect FTP Server from WindowsPhone? Step 1: 1. Open Microsoft Visual Studio Express 2013 for Windows (or) later. 2. Create new silverlight project using the "Blank App" template available under Visual C# -> Store Apps -> Windows Phone Apps. (for example project name :WindowsPhoneFTP) Step 2: Recently i made one binary file for FTP on windows phone programming, and temporarily now i can directly add ftp binary file to reference folder to access FTP server using windows phone 8.0 silverlight C#. Step 3: Now add below helper class for uploading files to FTP server. - public class FtpClientClass - { - private static string m_Host = "Please ENTER your Server Name/IP"; - private static string FtpUserName = "Server UserName"; - private static string FtpPwd = "Server Password"; - private static string FtpPort = "21"; - private readonly NetworkCredential m_Credentials = new NetworkCredential(FtpUserName, FtpPwd); - - private const string PathToSet = "/"; - - private const string FolderToCreate = "";//"WindowsPhone"; - public FtpClient CreateFtpClient() - { - - var ftpClient = new FtpClient(); - ftpClient.Credentials = m_Credentials; - ftpClient.HostName = new HostName(m_Host); - ftpClient.ServiceName = FtpPort; - - return ftpClient; - } - - //Upload file to Ftp Server - public async Task<string> UploadFileAsync(string DataFields, string fileUsername) - { - try - { - if (fileUsername=="") - { - fileUsername = "NoUserName"; - } - System.Diagnostics.Debug.WriteLine("-----------------------------------"); - System.Diagnostics.Debug.WriteLine("TestCreateFileAsync"); - System.Diagnostics.Debug.WriteLine("-----------------------------------"); - - var ftpClient = CreateFtpClient(); - - await ftpClient.ConnectAsync(); - await ftpClient.CreateDirectoryAsync(PathToSet + FolderToCreate); - string testFilePath = PathToSet + fileUsername + ".txt"; - - using (var stream = await ftpClient.OpenWriteAsync(testFilePath)) - { - var bytes = Encoding.UTF8.GetBytes(DataFields); - - await stream.WriteAsync(bytes.AsBuffer()); - await stream.FlushAsync(); - } - //await ftpClient.DisconnectAsync(); - return "Success"; - } - catch(Exception ex) - { - return ex.Message; - } - } - } Now we can use above helper class like below: FTP" Background="#FF58E277" Click="BtnSubmit_Click"/> - <TextBlock Name="TbckUploadStatus" Grid. - </Grid> - </Grid> Output: If you want to access SFTP server from windows phone silverlight, you can read it on next article from here. :) Hi, I am getting below error Metadata file .. Upload files to Ftp Server ..\WindowsPhoneFTP\Bin\Debug\WinPhone' could not be found Hi bro, I download the library from "Code_Gupta.WPFTP" and fix the issue. Hello Sir, Good Day! Can you show me how you do it in step 2? thanks Could you please download source code and find "WinPhone" inside Bin\Debug folder. Once you find, please add it manually to your project reference.
http://bsubramanyamraju.blogspot.com/2015/12/windowsphone-silverlight-80-81-upload.html
CC-MAIN-2017-34
en
refinedweb
When building your application there's a lot to worry about, from choice of framework and stack to designing the application itself, to questions of when to worry about scalability. Your database shouldn't have to be one extra layer of concern. You should be able to put data in, trust it will stay safe, and finally get data back out – in a performant manner. Yet, with all the moving parts in your application, understanding issues in your database can be a huge pain. Today we're releasing pg-extras, a heroku toolbelt plugin, to provide additional insights into your database and to make working with your database easier. $ heroku plugins:install git://github.com/heroku/heroku-pg-extras.git Now you have many more commands available to provide you the insight you need within the pg namespace of the heroku toolbelt: $ heroku help pg ... pg:bloat [DATABASE] # show table and index bloat in your database ordered by most wasteful pg:blocking [DATABASE] # display queries holding locks other queries are waiting to be released pg:cache_hit [DATABASE] # calculates your cache hit rate (effective databases are at 99% and up) ... You can read more on each command available and what insights you can get from it within the pg-extras readme. Lets highlight a few: cache_hit this RAM for its cache. You can read more about how this works in our article on understanding postgres data caching. As a guide for most web applications cache hit ratio should be in the 99%+ range. $ heroku pg:cache_hit name | ratio ----------------+------------------------ index hit rate | 0.99985155862675559832 cache hit rate | 0.99999671620611908765 (2 rows) index_usage Premature optimization has both the cost of losing time on feature development and the risk of wasted optimizations. Indexes are one area that’s easy to ignore until you actually need them. A good index should be across a table of some reasonable size and highly selective. However indexes aren’t free, as there is a measurable cost to keeping them updated and storing them, so unused indexes which you can see with heroku pg:unused_indexes are to be avoided. With pg:index_usage you can begin to get a clear idea of how to manage/maintain your indexes. Running the command will give you output like the following: $ heroku pg:index_usage --app dashboard relname | percent_of_times_index_used | rows_in_table ---------------------+-----------------------------+--------------- events | 65 | 1217347 app_infos | 74 | 314057 app_infos_user_info | 0 | 198848 From the above you can see that the app_infos_user_info has never had an index used and could likely benefit from adding an index. Even the events table could benefit from some additional indexes. From this you could then follow a more detailed guide for getting your indexes setup. locks Locks are bound to happen within your database, usually these are very short lived on the order of milliseconds. In PostgreSQL fortunately writing data does not hold a lock preventing it from being read. However, you can still encounter unintentional cases where you have long lived locks due to contention. Such cases can create follower lag, cause issues for other queries and in general start to impact application performance. With the pg:locks command you can easily view all current locks and how long they've been held. kill Whether its lock contention, a long running analytics query, or a bad cross join there are times where you want to stop a query. The pg:kill statement will allow you to kill any currently running query by specifying its pid which is displayed with commands pg:locks and pg:ps. Or if your database is in dire straights you also have the abilty to run pg:killall to kill all currently running queries. Having the ability to stop runaway queries will allow you to feel even safer when others need access to your database to get the reports they need. The future We've already found pg-extras incredibly useful and expect you will too. Going forward pg-extras will be the playground for new commands available power users that install the plugin. Over time some commands may leave pg-extras and become part of the toolbelt or be removed if they’re not beneficial. We welcome your input on which commands you find helpful or what else you’d like to see in pg-extras at postgres@heroku.com.
https://blog.heroku.com/more_insight_into_your_database_with_pgextras
CC-MAIN-2017-34
en
refinedweb
This is the kind of problem we use dynamic programming. WHY? it's very challenging to figure out what's the pattern of optimal burst order. In fact, there's no clear rule that makes sense. Shall we burst the balloon with maximum coins? Or shall we burst the one with least. This is the time we introduce Dynamic Programming, as we want to solve the big problem from small subproblem. It is clear that the amount of coins you gain relies on your previous steps. This is a clear signal of using DP. The hard part is to define the subproblem. Think out what is clear in this problem? Let's scale this problem down. What is the fact you know for sure? Say if the array has only 1 balloon. The maximum coin would be the coin inside this ballon. This is the starting point! So let's move on to array with 2 balloons. Here, we have 2 cases, which of the balloon is the last one. The last one times the coins in boundary is the gain we get in the end. That is to say, last balloon is the key. Since we don't know the pattern of optimal. We just blindly iterate each balloon and check what's total gain if it's the last ballon. Let's use dp[i][j] to denote maximum gain from balloon range i to j. We try out each balloon as last burst in this range. Then the subproblem relation would be: foreach k in i to j: dp[j][i] = max(array[j-1]*array[k]*array[i+1] + dp[j][k-1] + dp[k+1][i], dp[j][i]); public class Solution { public int maxCoins(int[] nums) { // Extend list with head and tail (both are 1), index starts at 1 int array[] = new int[nums.length + 2]; array[0] = 1; array[array.length-1] = 1; for (int i = 0; i < nums.length; i++) { array[i+1] = nums[i]; } // Initialize DP arrays, 1 index based int dp[][] = new int[array.length][array.length]; //dp[i][j] stands for max coins at i step, burst j for (int i =0; i < array.length; i++) { for (int j = 0; j < array.length; j++) { dp[i][j] = 0; } } for (int i=1; i< array.length-1; i++) { for (int j=i; j >=1; j--) { // k as last for (int k=j; k <= i; k++) { dp[j][i] = Math.max(array[j-1]*array[k]*array[i+1] + dp[j][k-1] + dp[k+1][i], dp[j][i]); } } } return dp[1][array.length-2]; } }
https://discuss.leetcode.com/topic/38842/java-solution-with-explanations
CC-MAIN-2017-34
en
refinedweb
Announcements vovansimMembers Content count207 Joined Last visited Community Reputation336 Neutral About vovansim - RankMember [.net] Playing Many Videos vovansim replied to joelmartinez's topic in General and Gameplay Programmingjoelmartinez, I have worked on a similar project, although it was different in that we were limited to about four or five videos, but it was to run on a mobile device. Now, you say you would run this on a top of the line computer, so of course you have far far more hardware to work with, yet I think 60 videos is a goal ambitious enough to where you might run into the same kind of problem as I did. Something I ran into pretty much right off the bat was the fact that actually rendering the videos is the easy bit. ;) What takes a lot of time is decoding the video from the file, so I would suggest dedicating some time to research in this area. (Google seems to have plenty of answers for the kind of stats you would need.) Of course the videos would necessarily be pretty small. 60 videos at a time means about a 8 x 8 grid of pictures, so on 1600 x 1200 monitor, that would be the picture size roughly of 160 x 120, if you want them on screen all at the same time. That's pretty tiny, and if the original video you are reading from is that resolution, then maybe decoding won't be that bad. You'd need to run some tests I guess. Also, taking in all those videos all at the same time might be a bit hard. I am no HCI expert, but I would *guess* when you are looking at a grid that big, you probably can't see more than the 3x3 pictures around the point you are looking at. So maybe one curious experiment to run would be to track mouse motion, and only play the 9 videos in a square around the one the mouse is pointing at. Then, the user would follow where he's looking with the mouse, and only the videos he can take in at a time will be playing - the others would just be still images. That might look pretty cool. Of course, then, it might look like ass. ;) [.net] Simple and Efficient Log Parsing? vovansim replied to vovansim's topic in General and Gameplay ProgrammingThanks for the suggestions, guys. Basically what y'all are saying boils down to writing my own simplified parser. That's an option, of course, albeit not one I am altogether comfortable with. Thanks for the feedback any way. [.net] Simple and Efficient Log Parsing? vovansim posted a topic in General and Gameplay ProgrammingThe situation, in a nutshell, is as follows. I have a third-party application (not written in .NET) that I need to write a certain tool for (which I am writing in .NET). In order to do this, I need to be able to parse the application's log file. So, what I need is hopefully a simple, but reasonably flexible, and hopefully efficient way to parse the log file. Now, obviously, the log file is generated by the program, so there is a limited number of possible message formats. That's good. However, the number of different message types is a bit large. That's a bit bad. Also, some message formats are sort of mixed. That's pretty bad. So here's what I've been through so far. First, I thought, well, this ain't so bad. I'll just make a Regex for every type of message there is, get the parameters from named groups in the match, and be done with it. It was all good until I got to about 5 of those and realized I wasn't nearly half way through the message types yet, and the function that picked the regex and figured out which matched was getting quite long. So then, I decided, ok, we'll generalize this a wee bit. So I made a class which takes in regex-delegate pairs, and then automatically goes through all the expressions it knows about, and whichever matches, it automatically calls it's correespondent delegate. This seemed like a very nice solution until I ran into these mixed format messages. Basically, we can have a message like: Application loaded module A Application couldn't load module B Or we can have messages like: Application finished successfully Application finished with error #blah Or we can have mixed messages like: Application finished with error #blah, because it couldn't load module B So at first, I was like, whatever, I'll just try to cover every possible case, how hard could it be? Well, it might not be too hard, but it sure as hell is tedious, and I'm afraid to even think what happens when the aforementioned third party decides to change their application, and changes the log file format in the process. It just seems like a bad thing to do in general. So then my thought was to use some sort of scanner and parser generator. I got myself ANTLR, and played around with that. However, it almost seems to be overkill to use it to parse the log one line at a time, and the code it generates is a wee bit slow for my needs (not to mention, it kinda sucks to use a java-based application to generate my .NET code). So... I was wondering if someone knows of some lightweight framework for simple parsing. I would appreciate any suggestions here. :) Most ridiculous line of code you have seen? vovansim replied to Paradigm Shifter's topic in GDNet LoungeUgliest line of C# I've ever seen: string[] parts = someString.Split(new String(@"\")[0]); Most evil line of C I've ever seen (this was a prank played on a friend): fclose(stdout); Most evil line of C code I want to see (as a future prank played on a friend): #define sizeof(x) rand() ;) Vovan [.net] operator -= why not overloadable ? vovansim replied to UnshavenBastard's topic in General and Gameplay ProgrammingQuote:Original post by iMalc Quote:Original post by UnshavenBastard Maybe I should start a petition, collecting signatures, to Anders Hejlsberg that he makes -= and += etc overloadable in the next version? ;-)I'm curious; What would be your reasoning? I believe his reasoning is simple: if we overload operator-, it needs to return a new instance of vector. However, with the operator-=, you operate on the object to the left in place, hence saving a memory alloc and making code a tiny bit more efficient. I would really like to see how much of a difference this actually makes, but that'd be my thinking on this one. Vovan C# Flashing Problem vovansim replied to yadango's topic in For BeginnersWell, debugging it with a break-point is kind of hard, because it will fire at the slightest mouse move, and this shaking behavior manifests itself best with large moves. (Incidentally, conditional breakpoints would actually work nicely here. Stick one in there with the condition (dx < -100 || dx > 100).) yadango, if you put a Console.WriteLine("{0},{1}", dx, dy); after you get your deltas, it's quite illuminating. The mouse motion event fires twice, with opposite deltas. What I suspect happens is that when you move the control, the new mouse position over it registers, and hence, the new mouse move event is fired, this putting the control back where it was, which causes a new mouse event to fire, etc ad infinitum, until you let go of the button. The quick and dirty solution is to unregister the OnMouseMove event handler before this.SuspendLayout(), and reregister it after this.ResumeLayout(). A better solution would be to register to listen for the Move event as well, and handle the this.xPos, and this.yPos for the mouse there, so that when the new mouse move event fires, it doesn't send the control flying back to where it was before. Vovan [.net] mouse & DirectX.DirectInput vovansim replied to p997's topic in General and Gameplay ProgrammingWell, the thing about DirectInput is, it doesn't know or care where the mouse is. :) This is fine for games where you use the mouse without a cursor (ie aiming in FPS). This is not fine for places where you need a cursor (ie main menu GUI). So, if you do care about where the mouse is, you have to keep track of it yourself. :) See these resources for some help on how to accomplish that: Developing a GUI Using C++ and DirectX Moving Your Game to Windows, Part II: Mouse and Joystick Input <-- Really old article, so code is pretty useless, but it goes over the concepts. Hope that helps, Vovan C# - Removing a share from a directory vovansim replied to furiousp's topic in General and Gameplay ProgrammingFurious, To make the long story short, Delete is an instance method, which means you need to get your hands on the ManagementObject representing the share, and then invoke the delete method on that. Then the share that the management object represents will be killed. I threw together a quick (incomplete and inefficient) wrapper class for Win32_Share, which you may find below, along with a piece of testing code, which seems to work fine. using System; using System.Collections.Generic; using System.Text; using System.Management; namespace SharingTest { class Win32Share { public enum MethodStatus: uint { Success = 0, //Success AccessDenied = 2, //Access denied UnknownFailure = 8, //Unknown failure InvalidName = 9, //Invalid name InvalidLevel = 10, //Invalid level InvalidParameter = 21, //Invalid parameter DuplicateShare = 22, //Duplicate share RedirectedPath = 23, //Redirected path UnknownDevice = 24, //Unknown device or directory NetNameNotFound = 25 //Net name not found } public enum ShareType: uint { DiskDrive = 0x0, //Disk Drive PrintQueue = 0x1, //Print Queue Device = 0x2, //Device IPC = 0x3, //IPC DiskDriveAdmin = 0x80000000, //Disk Drive Admin PrintQueueAdmin = 0x80000001, //Print Queue Admin DeviceAdmin = 0x80000002, //Device Admin IpcAdmin = 0x80000003 //IPC Admin } private ManagementObject mWinShareObject; private Win32Share(ManagementObject obj) { mWinShareObject = obj; } #region Wrap Win32_Share properties public uint AccessMask { get { return Convert.ToUInt32(mWinShareObject["AccessMask"]); } } public bool AllowMaximum { get { return Convert.ToBoolean(mWinShareObject["AllowMaximum"]); } } public string Caption { get { return Convert.ToString(mWinShareObject["Caption"]); } } public string Description { get { return Convert.ToString(mWinShareObject["Description"]); } } public DateTime InstallDate { get { return Convert.ToDateTime(mWinShareObject["InstallDate"]); } } public uint MaximumAllowed { get { return Convert.ToUInt32(mWinShareObject["MaximumAllowed"]); } } public string Name { get { return Convert.ToString(mWinShareObject["Name"]); } } public string Path { get { return Convert.ToString(mWinShareObject["Path"]); } } public string Status { get { return Convert.ToString(mWinShareObject["Status"]); } } public ShareType Type { get { return (ShareType)Convert.ToUInt32(mWinShareObject["Type"]); } } #endregion #region Wrap Methods public MethodStatus Delete() { object result = mWinShareObject.InvokeMethod("Delete", new object[] { }); uint r = Convert.ToUInt32(result); return (MethodStatus)r; } public static MethodStatus Create(string path, string name, ShareType type, uint maximumAllowed, string description, string password) { ManagementClass mc = new ManagementClass("Win32_Share"); object[] parameters = new object[] { path, name, (uint)type, maximumAllowed, description, password, null }; object result = mc.InvokeMethod("Create", parameters); uint r = Convert.ToUInt32(result); return (MethodStatus)r; } // TODO: Implement here GetAccessMask and SetShareInfo similarly to the above #endregion public static IList<Win32Share> GetAllShares() { IList<Win32Share> result = new List<Win32Share>(); ManagementClass mc = new ManagementClass("Win32_Share"); ManagementObjectCollection moc = mc.GetInstances(); foreach (ManagementObject mo in moc) { Win32Share share = new Win32Share(mo); result.Add(share); } return result; } public static Win32Share GetNamedShare(string name) { // Not a very efficient implementation obviously, but heck... This is sample code. ;) IList<Win32Share> shares = GetAllShares(); foreach (Win32Share s in shares) if (s.Name == name) return s; return null; } } class Program { static void Main(string[] args) { // Create a share first: Win32Share.MethodStatus result = Win32Share.Create("C:\\temp", "Temp", Win32Share.ShareType.DiskDrive, 10, "My Temp Folder", null); if (result == Win32Share.MethodStatus.Success) { Console.WriteLine("Yay, we created a temp share!"); Win32Share tempShare = Win32Share.GetNamedShare("Temp"); Win32Share.MethodStatus deleteResult = tempShare.Delete(); if (deleteResult == Win32Share.MethodStatus.Success) { Console.WriteLine("Yay, we deleted temp share!"); } } } } } Advice on documentation of software builds? vovansim replied to dli200608's topic in General and Gameplay ProgrammingWell, I guess the first question to ask yourself when writing anykind of document like that (and an answer to which will help us help you [smile]) is "Who is the intended audience?" I know this can be kinda vague and everybody says it, so it loses its impact a bit, but yeah, it's important. So the first thing we want to figure out is this: is this document intended for internal or external consumption? Is the build tool going to be distributed to someone outside the company along with the tool itself? And if so, is it going to be distributed as a standalone tool, or just a thing to help them build the main piece of software that your company sells? Either way, with anything that's going to be used outside the company, you want to put as many details about the operation of the tool as possible, including caveats and anything extraordinary one needs to make it work. I suspect, however, this tool if for internal use only. In this case, you still need to consider: is this document for developers that are only going to use it? Or maybe at some point they might want to look at and modify the code of the tool itself for some reason? In the latter case, you might want to not only document the way the tool woks, but also talk about how it is implemented. In general, when I write a document like that, I put first a little overview section which lists what the tool does and a little about how it does it, and then one section for every major bit of fucntionality, including a detailed description as to what's going on, and any gotchas the user may encounter, and then maybe a separate section for caveats that didn't fit anywhere in particular. So, for example, if I were writing a document for an imaginary tool, I would first put a section of overview. In there I would say something like "The such-and-such tool is used to build software and package it for distribution to customers. It takes in a directory that is the root of where the project to be built lives and attempts to compile all the source files it recursively finds. Once the compilation succeeds, the tool will collect all the project's resource files and package them into a distributable archive. Once all the resources are collected, the tool generates an MSI installer ready to be dsitributed to the customers, which contains all the necessary executables, resources, and configuration." Ya know. Something like that. Adapt to your needs, and expand a bit, like what the heck is an installer. ;) Now, you got yourself an overview like that. Read over it and figure out what are the major steps. If you read my fictitious overview, you will notice there are four major bits of functionality in the tool: analyzing the directory structure to identify source files and resources, compiling sources, packaging resources, and generating an installer. So I would put a separate section in the document for each of those. The analyzing directory structure bit could like something like this: "The tool expects a directory command-line parameter at startup. This parameter will be used to locate the project to be distributed. If the directory pointed to by this path does not exist, the tool will quit with an error (error description goes here). Once the directory is located, the tool will recursively go through each file in this directory and its child directories. It will consider all .java, .c, .h, .cpp, and .hpp files to be source files, and they will be included in the compilation process. It will consider all .txt, .gif, and .help files resources, which will be packaged for release with the executable. Files with other extensions will be ignored." Notice here I would list any error that can be encountered during this stage of the execution of the tool, and if it's not obvious, explicitly say what may be a possible solution. Like, as above, if the directory doesn't exist, it's kind of obvious that you have to... you know... provide a valid directory. ;) However, in some tools I've seen, they might rely on some sort of environment variable. So, say our imaginary tool expected an environment variable telling it which version of the java compiler to use. Then definitely stick that into the section on the compilation process, and tell the user what the error might be if the environment variable is missing, how to fix this, and what to do if they can't set environment variables (contact sys admin, blah blah blah). Also, this reminded me, if the tool requires special start-up procedures, like it requires (or even supports) a bunch of startup command-line parameters, it would probably be a good idea to dedicate a separate section to enumerating the possible parameters, what kind of values they need, what happens if they are not provided, and such. Hmmm... What else... Right, so if there is anything at all quirky about the tool, make sure to mention it. Like did you just get it and it magically worked on your computer? I bet you had to do something to make it work. ;) List it. And in general, if you are in doubt whether something is obvious enough to be ignored in the document... It isn't. ;) First of all, remember there are people out there who aren't as smart as you, so if something is obvious to you, it may not be to them. Secondly, and even more importantly, recognize that often times things only seem obvious to you because you've looked at them for a long time. Someone who hasn't seen the tool before will be easily confused. Finally, if there is any kind of development involving this tool envisioned for the future, like some sort of plug-in system to support other compilers, make sure to write about how one might go about developing something like that. Wow... I've just written a document about writing documents. :) Let me tell you: once you've written a few documents thinking just about how to make 'em long enough to satisfy the superior, once you have that experience... You'll be praying for the ability to write concisely, instead of producing long-wided discussions that are too huge for anyone to read without going insane. ;) ... Hope that helps. :D -- Vovan [.net] Isotope Isometric Engine C# version using SDL.NET released vovansim replied to Simon Gillespie's topic in General and Gameplay ProgrammingSimon, Generally, these kinds of posts go into the Your Announcements forum, and cross-posting is not encouraged. Regarding the engine itself, first of all, good job getting it done. ;) I'll definitely check it out. -- Vovan Could every game be made of tiles ? [ 2D-wise ] vovansim replied to Ey-Lord's topic in General and Gameplay ProgrammingQuote:Original post by medevilenemy that isn't really true. Each 'tile' could represent an object, modeled by a set of relevant sprites. There could simply be a destruction sprite animation. Which is why I said inconvenient, not impossible. :) I still believe that for instance for Scorched Earth it would be very inconvenient to do this. Recall all the weapons it has. There are weapons that blast the soil away, burn it off, and of course, create more soil. Recall also that things like moles (IIRC that's what they were called) would dig into the soil and explode underground - then the terrain above would fall down to fill the gap. If this was done with tiles bigger than one pixel in size, it would be a major pain in the ass, as for every weapon you create, you would have to handle a bundle of special cases with the terrain. And if, OTOH, every tile you have is one pixel in size, well... To me then they aren't really tiles any more. ;) -- Vovan [web] Review My Site! vovansim replied to Gin's topic in General and Gameplay ProgrammingGin, It looks good, except for a couple things: 1. Like someone said, what is up with the radio button on the front page? It bothers me when I see a form control and I don't know what it does. :) 2. More seriously, in your image submit form is a hidden input called "MAX_FILE_SIZE". Now, I haven't checked this, but I am guessing you use this to check against the upload size and disallow files too big. I want to point out that I could write my own form, set MAX_FILE_SIZE to a huge number, and submit that to your script to work around this "restriction". Now, you may have addressed this somehow, in which case that's ok, but in general it is not safe to get data that affects the stability of your server from a request provided by the user. Vovan Could every game be made of tiles ? [ 2D-wise ] vovansim replied to Ey-Lord's topic in General and Gameplay ProgrammingQuote:Original post by Ey-Lord Can you think of games 2D that cannot be made under some kind of tile system ? Consider a game like Scorched Earth, or the original Worms, or Lemmings, or Soldat. Since the terrain is destructible, it might be sort of unnatrual and a bit inconvenient to use standard tiles, IMO. -- Vovan [.net] C#: debug faster than release? vovansim replied to krum's topic in General and Gameplay ProgrammingQuote:Original post by BradSnobar That's likely a first chance exception. Good luck trying to figure out what is causing it though. Actually it's not that hard. Firstly, you can still run Release-mode compiled apps in the debugger. Secondly, you can tell the debugger to break on an exception throw even if it is caught somewhere (in other words, it will break on first-chance exceptions). See here for how to do that in VS2k3, and the same thing works in 2k5 too: How to Stop on First Chance Exceptions - Visual Studio .NET 2003 -- Vovan Worst interview vovansim replied to Dirge's topic in GDNet LoungeQuote:Original post by leiavoia That's only good if you WANT to work 50-60 hours a week. Me personally, i have too many other spare-time things i want/need to do. Getting rich is not my goal in life. As long as i can make enough money to live on and have a little bit left over, that's enough. If i could work just 10 hours a week and make $100 an hour, i'd do it! Oh, I definitely don't disagree. The reason I was giving this example was to just underscore JohnBolton's point: he said no to overtime and didn't make it in, whereas a friend of mine said yes, and did. I guess that's not too much of a surprise with this company, however.
https://www.gamedev.net/profile/43745-vovansim/
CC-MAIN-2017-34
en
refinedweb
Application Security With Apache Shiro - | - - - - - - Read later Reading List Are you frustrated when you try to secure your applications? Do you feel existing Java security solutions are difficult to use and only confuse you further? This article introduces Apache Shiro, a Java security framework that provides a simple but powerful approach to application security. It explains Apache Shiro’s project goals, architectural philosophies and how you might use Shiro to secure your own applications. What is Apache Shiro? Apache Shiro (pronounced “shee-roh”, the Japanese word for ‘castle’) (I like to call these the 4 cornerstones of application security): -. Why was Apache Shiro created? For a framework to really make a good case for its existence, and therefore a reason for you to use it, it should satisfy needs that aren’t met by other alternatives. To understand this, we need to look at Shiro’s history and the alternatives when it was created. Before entering the Apache Software Foundation in 2008, Shiro was already 5 years old and previously known as the JSecurity project, which started in early 2003. In 2003, there weren’t many general-purpose security alternatives for Java application developers - we were pretty much stuck with the Java Authentication and Authorization Service, otherwise known as JAAS. There were a lot of shortcomings with JAAS - while its authentication capabilities were somewhat tolerable, the authorization aspects were obtuse and frustrating to use. Also, JAAS was heavily tied to Virtual Machine-level security concerns, for example, determining if a class should be allowed to be loaded in the JVM. As an application developer, I cared more about what an application end-user could do rather than what my code could do inside the JVM. Due to the applications I was working with at the time, I also needed access to a clean, container-agnostic session mechanism. The only session choices in the game at the time were HttpSessions, which required a web container, or EBJ 2.1 Stateful Session Beans, which required an EJB container. I needed something that could be decoupled from the container, usable in any environment I chose. Finally, there was the issue of cryptography. There are times when we all need to keep data secure, but the Java Cryptography Architecture was hard to understand unless you were a crypto expert. The API was full of checked exceptions and felt cumbersome to use. I was hoping for a cleaner out-of-the-box solution to easily encrypt and decrypt data as necessary. So looking at the security landscape of early 2003, you can quickly realize that there was nothing that could satisfy all of those requirements in a single, cohesive framework. Because of that, JSecurity, and then later, Apache Shiro, was born. Why would you use Apache Shiro today?. Commercial companies like Katasoft also provide professional support and services if desired. Who’s Using Shiro? Shiro and its predecessor JSecurity has been in use for years in projects for companies of all sizes and across industries. Since becoming an Apache Software Foundation Top Level Project, site traffic and adoption have continued to grow significantly. Many open-source communities are using Shiro as well, for example, Spring, Grails, Wicket, Tapestry, Tynamo, Mule, and Vaadin, just to name a few. Commercial companies like Katasoft, Sonatype, MuleSoft, one of the major social networks, and more than a few New York commercial banks use Shiro to secure their commercial software and websites. Core Concepts: Subject, SecurityManager, and Realms Now that we’ve covered Shiro’s benefits, let’s jump right in to its API so you can get a feel for it. Shiro’s architecture has three main concepts - the Subject, the SecurityManager, and Realms. Subject When you’re securing your application, probably the most relevant questions to ask yourself are, “Who is the current user?” or “Is the current user allowed to do X”? It is common for us to ask ourselves these questions as we're writing code or designing user interfaces: applications are usually built based on user stories, and you want functionality represented (and secured) based on a per-user basis. So, the most natural way for us to think about security in our application is based on the current user. Shiro’s API fundamentally represents this way of thinking in its Subject concept. The word Subject is a security term that basically means "the currently executing user". It's just not called a 'User' because the word 'User' is usually associated with a human being. In the security world, the term 'Subject' can mean a human being, but also a 3rd party process, daemon account, or anything similar. It simply means 'the thing that is currently interacting with the software'. For most intents and purposes though, you can think of this as Shiro’s ‘User’ concept. You can easily acquire the Shiro Subject anywhere in your code as shown in Listing 1 below. Listing 1. Acquiring the Subject import org.apache.shiro.subject.Subject; import org.apache.shiro.SecurityUtils; ... Subject currentUser = SecurityUtils.getSubject(); Once you acquire the Subject, you immediately have access to 90% of everything you’d want to do with Shiro for the current user, such as login, logout, access their session, execute authorization checks, and more - but more on this later. The key point here is that Shiro’s API is largely intuitive because it reflects the natural tendency for developers to think in ‘per-user’ security control. It is also easy to access a Subject anywhere in code, allowing security operations to occur anywhere they are needed. SecurityManager The Subject’s ‘behind the scenes’ counterpart is the SecurityManager. While the Subject represents security operations for the current user, the SecurityManager manages security operations for all users. It is the heart of Shiro’s architecture and acts as a sort of ‘umbrella’ object that references many internally nested security components that form an object graph. However, once the SecurityManager and its internal object graph is configured, it is usually left alone and application developers spend almost all of their time with the Subject API. So how do you set up a SecurityManager? Well, that depends on your application environment. For example, a web application will usually specify a Shiro Servlet Filter in web.xml, and that will set up the SecurityManager instance. If you’re running a standalone application, you’ll need to configure it another way. But there are many of configuration options. There is almost always a single SecurityManager instance per application. It is essentially an application singleton (although it does not need to be a static singleton). Like almost all things in Shiro, the default SecurityManager implementations are POJOs and are configurable with any POJO-compatible configuration mechanism - normal Java code, Spring XML, YAML, .properties and .ini files, etc. Basically anything that is capable of instantiating classes and calling JavaBeans-compatible methods may be used. To that end, Shiro provides a default ‘common denominator’ solution via text-based INI configuration. INI is easy to read, simple to use, and requires very few dependencies. You’ll also see that with a simple understanding of object graph navigation, INI can be used effectively to configure simple object graph like the SecurityManager. Note that Shiro also supports Spring XML configuration and other alternatives, but we’ll cover INI here. The simplest example of configuring Shiro based on INI is shown in the example in Listing 2 below. Listing 2. Configuring Shiro with INI [main] cm = org.apache.shiro.authc.credential.HashedCredentialsMatcher cm.hashAlgorithm = SHA-512 cm.hashIterations = 1024 # Base64 encoding (less text): cm.storedCredentialsHexEncoded = false iniRealm.credentialsMatcher = $cm [users] jdoe = TWFuIGlzIGRpc3Rpbmd1aXNoZWQsIG5vdCBvbmx5IGJpcyByZWFzb2 asmith = IHNpbmd1bGFyIHBhc3Npb24gZnJvbSBvdGhlciBhbXNoZWQsIG5vdCB In Listing 2, we see the example INI configuration we will use to configure the SecurityManager instance. There are two INI sections: [main] and [users]. The [main] section is where you configure the SecurityManager object and/or any objects (like Realms) used by the SecurityManager. In this example, we see two objects being configured: - The cm object, which is an instance of Shiro’s HashedCredentialsMatcher class. As you can see, various properties of the cm instance are being configured via 'nested dot' syntax - a convention used by the IniSecurityManagerFactory shown in Listing 3, to represent object graph navigation and property setting. - The iniRealm object, which is a component used by the SecurityManager to represent user accounts defined in the INI format. The [users] section is where you can specify a static list of user accounts - convenient for simple applications or when testing. For the purposes of this introduction, it is not important to understand the intricacies of each section, but rather to see that INI configuration is one simple way of configuring Shiro. For more detailed information on INI configuration, please see Shiro's documentation. Listing 3. Loading shiro.ini Configuration File import org.apache.shiro.SecurityUtils; import org.apache.shiro.config.IniSecurityManagerFactory; import org.apache.shiro.mgt.SecurityManager; import org.apache.shiro.util.Factory; ... //1. Load the INI configuration Factory<SecurityManager> factory = new IniSecurityManagerFactory("classpath:shiro.ini"); //2. Create the SecurityManager SecurityManager securityManager = factory.getInstance(); //3. Make it accessible SecurityUtils.setSecurityManager(securityManager); In Listing 3, we see a three-step process in this simple example: - Load the INI configuration that will configure the SecurityManager and its constituent components. - Create the SecurityManager instance based on the configuration (using Shiro’s Factory concept that represents the Factory Method design pattern). - Make the SecurityManager singleton accessible to the application. In this simple example, we set it as a VM-static singleton, but this is usually not necessary - your application configuration mechanism can determine if you need to use static memory or not. Realms The third and final core concept in Shiro is that of a Realm. A Realm acts as the ‘bridge’ or ‘connector’ between Shiro and your application’s security data. That is,. More than one Realm may be configured,. Listing 4 below is an example of configuring Shiro (via INI) to use an LDAP directory as one of the application’s Realms. Listing 4. Example realm configuration snippet to connect to LDAP user data store Now that we’ve seen how to set up a basic Shiro environment, let’s discuss how you, as a developer, would go about using the framework. Authentication Authentication is the process of verifying a user's identity. That is, when a user authenticates with an application, they are proving they actually are who they say they are. This is also sometimes referred to as 'login'. This is typically a three-step process. - Collect the user’s identifying information, called principals, and supporting proof of identity, called credentials. - Submit the principals and credentials to the system. - If the submitted credentials match what the system expects for that user identity (principal), the user is considered authenticated. If they don’t match, the user is not considered authenticated. A common example of this process that everyone is familiar with is that of the username/password combination. When most users login to a software application, they usually provide their username (the principal) and their supporting password (the credential). If the password (or representation of it) stored in the system matches what the user specifies, they are considered authenticated. Shiro supports this same workflow in a simple and intuitive way. As we’ve said, Shiro has a Subject-centric API - almost everything you care to do with Shiro at runtime is achieved by interacting with the currently executing Subject. So, to login a Subject, you simply call its login method, passing an AuthenticationToken instance that represents the submitted principals and credentials (in this case, a username and password). This example is shown in Listing 5 below. Listing 5. Subject Login //1. Acquire submitted principals and credentials: AuthenticationToken token = new UsernamePasswordToken(username, password); //2. Get the current Subject: Subject currentUser = SecurityUtils.getSubject(); //3. Login: currentUser.login(token); As you can see, Shiro’s API easily reflects the common workflow. You’ll continue to see this simplicity as a theme for all of the Subject’s operations. When the login method is called, the SecurityManager will receive the AuthenticationToken and dispatch it to one or more configured Realms to allow each to perform authentication checks as required. Each Realm has the ability to react to submitted AuthenticationTokens as necessary. But what happens if the login attempt fails? What if the user specified an incorrect password? You can handle failures by reacting to Shiro’s runtime AuthenticationException as shown in Listing 6. Listing 6. Handle Failed Login //3. Login: try { currentUser.login(token); } catch (IncorrectCredentialsException ice) { … } catch (LockedAccountException lae) { … } … catch (AuthenticationException ae) {… } You can choose to catch one of the AuthenticationException subclasses and react specifically, or generically handle any AuthenticationException (for example, show the user a generic “Incorrect username or password” message). The choice is yours depending on your application requirements. After a Subject logs-in successfully, they are considered authenticated and usually you allow them to use your application. But just because a user proved their identity doesn’t mean that they can do whatever they want in your application. That begs the next question, “how do I control what the user is allowed to do or not?” Deciding what users are allowed to do is called authorization. We’ll cover how Shiro enables authorization next. Authorization Authorization is essentially access control - controlling what your users can access in your application, such as resources, web pages, etc. Most users perform access control by employing concepts such as roles and permissions. That is, a user is usually allowed to do something or not based on what roles and/or permissions are assigned to them. Your application can then control what functionality is exposed based on checks for these roles and permissions. As you might expect, the Subject API allows you to perform role and permission checks very easily. For example, the code snippet in Listing 7 shows how to check if a Subject has been assigned a certain role. Listing 7. Role Check if ( subject.hasRole(“administrator”) ) { //show the ‘Create User’ button } else { //grey-out the button? } As you can see, your application can enable or disable functionality based on access control checks. Permission checks are another way to perform authorization. Checking for roles as in the example above suffers from one significant flaw: you can’t add or delete roles at runtime. Your code is hard-coded with role names, so if you changed the role names and/or configuration, your code would be broken! If you need to be able to change a role’s meaning at runtime, or add or delete roles as desired, you have to rely on something else. To that end, Shiro supports its notion of permissions. A permission is a raw statement of functionality, for example ‘open a door’, ‘create a blog entry’, ‘delete the ‘jsmith’ user’, etc. By having permissions reflect your application’s raw functionality, you only need to change permission checks when you change your application’s functionality. In turn, you can assign permissions to roles or to users as necessary at runtime. As an example, shown in Listing 8 below, we can rewrite our previous role check and instead use a permission check. Listing 8. Permission Check if ( subject.isPermitted(“user:create”) ) { //show the ‘Create User’ button } else { //grey-out the button? } This way, any role or user assigned the “user:create” permission can click the ‘Create User’ button, and those roles and assignments can even change at runtime, providing you with a very flexible security model. The “user:create” string is an example of a permission string that adheres to certain parsing conventions. Shiro supports this convention out of the box with its WildcardPermission. Although out of scope for this introduction article, you’ll see that the WildcardPermission can be extremely flexible when creating security policies, and even supports things like instance-level access control. Listing 9. Instance-Level Permission Check if ( subject.isPermitted(“user:delete:jsmith”) ) { //delete the ‘jsmith’ user } else { //don’t delete ‘jsmith’ } This example shows that you can control, even down to a very fine-grained instance level, access to individual resources if you needed to. You could even invent your own permission syntax if you wanted to. See the Shiro Permission documentation for more information. Finally, just as with authentication, the above calls eventually make their way to the SecurityManager, which will consult one or more Realms to make the access control decisions. This allows a Realm to respond to both authentication and authorization operations as necessary. So that is a brief overview of Shiro’s authorization capabilities. And while most security frameworks stop at authentication and authorization, Shiro provides much more. Next we’ll talk about Shiro’s advanced Session Management capabilities. Session Management Apache Shiro provides something unique in the world of security frameworks: a consistent Session API usable in any application and any architectural tier. That is, Shiro enables a Session programming paradigm for any application - from small daemon standalone applications to the largest clustered web applications. This means that application developers who wish to use sessions are no longer forced to use Servlet or EJB containers if they don’t need them otherwise. Or, if using these containers, developers now have the option of using a unified and consistent session API in any tier, instead of servlet or EJB-specific mechanisms. But perhaps one of the most important benefits of Shiro’s sessions is that they are container-independent. This has subtle but extremely powerful implications. For example, let’s consider session clustering. How many container-specific ways are there to cluster sessions for fault-tolerance and failover? Tomcat does it differently than Jetty, which does it differently than Websphere, etc. But with Shiro sessions, you obtain a container-independent clustering solution. Shiro’s architecture allows for pluggable Session data stores, such as enterprise caches, relational databases, NoSQL systems and more. This means that you can configure session clustering once and it will work the same way regardless of your deployment environment - Tomcat, Jetty, JEE Server or standalone application. There is no need to reconfigure your app based on how you deploy your application. Another benefit of Shiro’s sessions is session data can be shared across client technologies if desired. For example, a Swing desktop client can participate in the same web application session if desired - useful if the end-user is using both simultaneously. So how do you access a Subject’s session in any environment? There are two Subject methods as shown in the example below. Listing 10. Subject’s Session Session session = subject.getSession(); Session session = subject.getSession(boolean create); As you can see, the methods are identical in concept to the HttpServletRequest API. The first method will return the Subject’s existing Session, or if there isn’t one yet, it will create a new one and return it. The second method accepts a boolean argument that determines whether or not a new Session will be created if it does not yet exist. Once you acquire the Subject’s Session, you can use it almost identically to an HttpSession. The Shiro team felt that the HttpSession API was most comfortable to Java developers, so we retained much of its feel. The big difference, of course, is that you can use Shiro Sessions in any application, not just web applications. Listing 11 shows this familiarity. Listing 11. Session methods Session session = subject.getSession(); session.getAttribute(“key”, someValue); Date start = session.getStartTimestamp(); Date timestamp = session.getLastAccessTime(); session.setTimeout(millis); ... Cryptography Cryptography is the process of hiding or obfuscating data so prying eyes can’t understand it. Shiro’s goal in cryptography is to simplify and make usable the JDK’s cryptography support. It is important to note that cryptography is not specific to Subjects in general, so it is one area of Shiro’s API that is not Subject-specific. You can use Shiro’s cryptography support anywhere, even if a Subject is not being used. The two areas where Shiro really focuses its cryptography support is in the areas of cryptographic hashes (aka message digests) and cryptographic ciphers. Let's take a look at these two in more detail. Hashing If you have used the JDK’s MessageDigest class, you quickly realize that it is a bit cumbersome to work with. It has an awkward static method factory-based API instead of an object-oriented one, and you are forced to catch checked exceptions that may never need to be caught. If you need to hex-encode or Base64-encode message digest output, you’re on your own - there’s no standard JDK support for either. Shiro addresses these issues in a clean and intuitive hashing API. For example, let’s consider the relatively common case of MD5-hashing a file and determining the hex value of that hash. Called a ‘checksum’, this is used regularly when providing file downloads - users can perform their own MD5 hash on the downloaded file and assert that their checksum matches the one on the download site. If they match, the user can sufficiently assume that the file hasn’t been tampered with in transit. Here is how you might try to do this without Shiro: - Convert the file to a byte array. There is nothing in the JDK to assist with this, so you’ll need to create a helper method that opens a FileInputStream, uses a byte buffer, and throws the appropriate IOExceptions, etc. - Use the MessageDigest class to hash the byte array, dealing with the appropriate exceptions, as shown in Listing 12 below. - Encode the hashed byte array into hex characters. There is nothing in the JDK to assist with this either, so you’ll need to create another helper method and probably use bitwise operations and bitshifting in your implementation. Listing 12. JDK’s MessageDigest try { MessageDigest md = MessageDigest.getInstance("MD5"); md.digest(bytes); byte[] hashed = md.digest(); } catch (NoSuchAlgorithmException e) { e.printStackTrace(); } That’s a significant amount of work for something so simple and relatively common. Now here’s how to do the exact same thing with Shiro. String hex = new Md5Hash(myFile).toHex(); It is remarkably simpler and easier to understand what is going on when you use Shiro to simplify all of that work. SHA-512 hashing and Base64-encoding of passwords is just as easy. String encodedPassword = new Sha512Hash(password, salt, count).toBase64(); You can see how much Shiro simplifies hashing and encoding, saving you a bit of sanity in the process. Ciphers Ciphers are cryptographic algorithms that can reversibly transform data using a key. We use them to keep data safe, especially when transferring or storing data, times when data is particularly susceptible to prying eyes. If you’ve ever used the JDK Cryptography APIs, and in particular the javax.crypto.Cipher class, you know that it can be an incredibly complex beast to tame. For starters, every possible Cipher configuration is always represented by an instance of javax.crypto.Cipher. Need to do public/private key cryptography? You use the Cipher. Need to use a Block Cipher for streaming operations? You use the Cipher. Need to create an AES 256-bit Cipher to secure data? You use the Cipher. You get the idea. And how do you create the Cipher instance you need? You create a complex, unintuitive token-delimited String of cipher options, called a “transformation string”, and you pass this string to a Cipher.getInstance static factory method. With this cipher option String approach, there is no type-safety to ensure you’re using valid options. This also implicitly means that there is no JavaDoc to help you understand relevant options. And you are further required to deal with checked exceptions in case your String is incorrectly formulated, even if you know that the configuration is correct. As you can see, working with JDK Ciphers is quite a cumbersome task. These techniques were once standard for Java APIs a long time ago, but times have changed, and we want a much easier approach. Shiro tries to simplify the entire concept of cryptographic ciphers by introducing its CipherService API. A CipherService is what most developers want when securing data: a simple, stateless, thread-safe API that can encrypt or decrypt data in its entirety in one method call. All you need to do is provide your key, and you can encrypt or decrypt as necessary. For example, 256-bit AES encryption can be used as shown in the Listing 13 below. Listing 13. Apache Shiro’s Encryption API AesCipherService cipherService = new AesCipherService(); cipherService.setKeySize(256); //create a test key: byte[] testKey = cipherService.generateNewKey(); //encrypt a file’s bytes: byte[] encrypted = cipherService.encrypt(fileBytes, testKey); The Shiro example is simpler compared to the JDK’s Cipher API: - You can instantiate a CipherService directly - no strange or confusing factory methods. - Cipher configuration options are represented as JavaBeans-compatible getters and setters - there is no strange and difficult to understand “transformation string”. - Encryption and decryption is executed in a single method call. - No forced checked exceptions. Catch Shiro’s CryptoException if you want. There are other benefits to Shiro’s CipherService API, such as the ability to support both byte array-based encryption/decryption (called ‘block’ operations) as well as stream-based encryption/decryption (for example, encrypting audio or video). Java Cryptography doesn’t need to be painful. Shiro’s Cryptography support is meant to simplify your efforts to keep your data secure. Web Support Last, but not least, we’ll briefly introduce Shiro’s web support. Shiro ships with a robust web support module to help secure web applications. It is simple to set up Shiro for a web application. The only thing necessary is to define a Shiro Servlet Filter in web.xml. Listing 14 shows this code. Listing 14. ShiroFilter in web.xml <filter> <filter-name>ShiroFilter</filter-name> <filter-class> org.apache.shiro.web.servlet.IniShiroFilter </filter-class> <!-- no init-param means load the INI config from classpath:shiro.ini --> </filter> <filter-mapping> <filter-name>ShiroFilter</filter-name> <url-pattern>/*</url-pattern> </filter-mapping> This filter can read the aforementioned shiro.ini config so you have a consistent configuration experience regardless of deployment environment. Once configured, the Shiro Filter will filter every request and ensure the request-specific Subject is accessible during the request. And because it filters every request, you can perform security-specific logic to ensure only requests that meet certain criteria are allowed through. URL-Specific Filter Chains Shiro supports security-specific filter rules through its innovative URL filter chaining capability. It allows you to specify ad-hoc filter chains for any matching URL pattern. This means you have a great deal of flexibility in enforcing security rules (or combinations of rules) using Shiro’s filtering mechanisms - much more so than you could achieve defining filters in web.xml alone. Listing 15 shows the configuration snippet in Shiro INI. Listing 15. Path-specific Filter Chains [urls] /assets/** = anon /user/signup = anon /user/** = user /rpc/rest/** = perms[rpc:invoke], authc /** = authc As you can see, there is a [urls] INI section available to web applications. For each line, the values on the left of the equals sign represent a context-relative web application path. The values on the right define a Filter chain - an ordered, comma delimited list of Servlet filters to execute for the given path. Each filter is a normal Servlet Filter, but the filter names you see above (anon, user, perms, authc) are special security-related filters that Shiro provides out-of-the-box. You can mix and match these security filters to create a very custom security experience. You can also specify any other existing Servlet Filter you may have. How much nicer is this compared to using web.xml, where you define a block of filters and then a separate disconnected block of filter patterns? Using Shiro’s approach, it is much easier to see exactly the filter chain that is executed for a given matching path. If you wanted to, you could define only the Shiro Filter in web.xml and define all of your other filters and filter chains in shiro.ini for a much more succinct and easy to understand filter chain definition mechanism than web.xml. Even if you didn’t use any of Shiro’s security features, this one small convenience alone can make Shiro worth using. JSP Tag Library Shiro also provides a JSP tag library that allows you to control the output of your JSP pages based on the current Subject’s state. One common example of where this is useful is in displaying the ‘Hello <username>’ text after a user is logged-in. But if they are anonymous, you might want to show something else, like “Hello! Register Today!” instead. Listing 16 shows how you might support this with Shiro’s JSP tags. Listing 16. JSP Taglib Example <%@ taglib prefix="shiro" uri="" %> ... <p>Hello <shiro:user> <!-- shiro:principal prints out the Subject’s main principal - in this case, a username: --> <shiro:principal/>! </shiro:user> <shiro:guest> <!-- not logged in - considered a guest. Show the register link: --> ! <a href=”register.jsp”>Register today!</a> </shiro:guest> </p> There are other tags that allow you to include output based on what roles they have (or don’t have), what permissions are assigned (or not assigned), and whether they are authenticated, remembered from “Remember Me” services, or an anonymous guest. There are many other web-specific features that Shiro supports, like simple “Remember Me” services, REST and BASIC authentication, and of course transparent HttpSession support if you want to use Shiro’s native enterprise sessions. Please see the Apache Shiro web documentation for more. Web Session Management Finally, it is interesting to point out Shiro’s support for sessions in a web environment. Default Http Sessions For web applications, Shiro defaults its session infrastructure to use the existing Servlet Container sessions that we’re all used to. That is, when you call the methods subject.getSession() and subject.getSession(boolean) Shiro will return Session instances backed by the Servlet Container’s HttpSession instance. The beauty of this approach is that business-tier code that calls subject.getSession() interacts with a Shiro Session instance - it has no ‘knowledge’ that it is working with a web-based HttpSession object. This is a very good thing when maintaining clean separation across architectural tiers. Shiro’s Native Sessions in the Web Tier If you’ve enabled Shiro’s native session management in a web application because you need Shiro’s enterprise session features (like container-independent clustering), you of course want the HttpServletRequest.getSession() and HttpSession API to work with the ‘native’ sessions and not the servlet container sessions. It would be very frustrating if you had to refactor any code that uses the HttpServletRequest and HttpSession API to instead use Shiro’s Session API. Shiro of course would never expect you to do this. Instead, Shiro fully implements the Session part of the Servlet specification to support native sessions in web applications. This means that whenever you call a corresponding HttpServletRequest or HttpSession method call, Shiro will delegate these calls to its internal native Session API. The end result is that you don’t have to change your web code even if you’re using Shiro’s ‘native’ enterprise session management - a very convenient (and necessary) feature indeed. Additional Features There are other features in the Apache Shiro framework that are useful to securing Java applications, such as: - Threading and Concurrency support for maintaining Subjects across threads (Executor and ExecutorService support) - Callable and Runnable support for executing logic as a specific Subject - “Run As” support for assuming the identity of another Subject (e.g. useful in administrative applications) - Test harness support, making it very easy to have full testing of Shiro secured-code in unit and integration tests Framework Limitations As much as we might like it to be, Apache Shiro is not a ‘silver bullet’ - it won’t solve every security problem effortlessly. There are things that Shiro does not address that might be worth knowing: - Virtual Machine-level concerns: Apache Shiro does not currently deal with Virtual Machine level security, such as the ability to prevent certain classes from loading in a class loader based on an access control policy. However, it is not unconceivable that Shiro could integrate with the existing JVM security operations - it is just that no one has contributed such work to the project. - Multi-Stage Authentication: Shiro does not currently natively support ‘multi-stage’ authentication, where a user might login via one mechanism, only to be asked to then log-in again with a different mechanism. This has been accomplished in Shiro-based applications however by the application collecting all required information up front and then interacting with Shiro. It is a real possibility that this feature will be supported in a future Shiro version. - Realm Write Operations: Currently all Realm implementations support ‘read’ operations for acquiring authentication and authorization data to perform logins and access control. ‘Write’ operations, like creating user accounts, groups and roles, or associating users with roles groups and permissions, are not supported. This is because the data model to support these operations varies dramatically across applications and it would be difficult to enforce a ‘write’ API on all Shiro users. Upcoming Features Apache Shiro’s community continues to grow every day, and with it, so does Shiro’s features. In upcoming versions, you are likely to see: - Cleaner Web Filter mechanism that allows more pluggable filtering support without subclassing. - More pluggable default Realm implementations favoring composition over inheritance. You will be able to plug in components that look up authentication and authorization data instead of requiring that you subclass a Shiro Realm implementation - Robust OpenID and OAuth (and possibly Hybrid) client support - Captcha support - Easier configuration for 100% stateless applications (e.g. many REST environments). - Multi-stage authentication via a request/response protocol. - Coarse-grained authorization via an AuthorizationRequest. - ANTLR grammar for security assertion queries (e.g. (‘role(admin) && (guest || !group(developer))’) Summary Apache Shiro is a full-featured, robust, and general-purpose Java security framework that you can use to secure your applications. By simplifying four areas of application security, namely, Authentication, Authorization, Session Management, and Cryptography, application security is much easier to understand and implement in real applications. Shiro’s simple architecture and JavaBeans compatibility allow it to be configured and used in almost any environment. Additional web support and auxiliary features like multithreading and test support round out the framework to provide what could be your ‘one stop shop’ for application security. Apache Shiro’s development team continues to move forward, refining the codebase and supporting the community. With continued open-source and commercial adoption, Shiro is only expected to grow stronger. Resources - Apache Shiro’s homepage. - Shiro’s Download Page, with additional information for Maven and Ant+Ivy users. - Apache Shiro’s Documentation Page, with Guides and a Reference Manual - An Apache Shiro presentation video and slides, presented by the project’s PMC Chair, Les Hazlewood. - Other Apache Shiro articles and presentations - Apache Shiro mailing lists and forums. - Katasoft - a company offering Apache Shiro professional support and application security products. About the Author Les Hazlewood is the Apache Shiro PMC Chair and co-founder and CTO of Katasoft, a start-up focusing on application security products and Apache Shiro professional support. Les has 10 years of experience as a professional Java developer and enterprise architect, with previous senior roles at Bloomberg, Delta Airlines, and JBoss. Les has been actively involved in Open Source development for more than 9 years, committing or contributing to projects like the Spring Framework, Hibernate, JBoss, OpenSpaces, and of course JSecurity, Apache Shiro's predecessor. Les currently lives in San Mateo, CA and practices Kendo and studies Japanese when he is not programming. Spring Security? by Andrew Swan Re: How does it compare to Spring Security? by bruce b I'm hoping that Shiro is more balanced in its abstractions, giving the developer the same level of support regardless of the runtime environment, be it lightweight web, full JEE/EJB, custom client/server or mobile. Apache Shiro vs. Spring Security by Les Hazlewood This is a hard question to answer in a thread post. It really deserves its own full blog post or article. But in lieu of that, the basic idea is that they differ based on scope and mental model/design. To the best of my knowledge, Shiro has a broader scope than Spring Security in that it also addresses problems associated with enterprise session management (agnostic session clustering, SSO, etc) as well as cryptography, concurrency, etc. Shiro also has one 'killer feature' that people reference a lot: built-in support for very fine-grained (e.g. instance-level) access control (see the 'Permissions' section in the article). There's much more than that of course, but that's just one example. Also, Shiro was designed from day one to work in all application environments (Spring, JEE, command line, smartphone, etc) - not just Spring environments. My personal approach to Shiro is that it should be a 'one stop shop' solution for application security, no matter what your container or architecture is. Finally, the feedback I've heard from our own end-users that also use Spring and Grails is that Shiro is just a lot easier to use and understand (there are quite a few blog posts, forum postings and Stack Overflow discussions that corroborate this, but don't take my word for it as I'm obviously biased :). Ultimately any decision you make should be based upon your own mental model/understanding. Give both a try and see what you prefer. Both frameworks are top notch, maintained by top-notch people - their scope and design/mental models are just different. HTH! Best, Les Integration with existing authentication mechanism by Gaetan Pitteloud I'm thinking about a JEE server that performs authentication and roles assignment, then Shiro gets this info (generally from proprietary server API) and creates a assumably fully-authenticated Subject. Re: Integration with existing authentication mechanism by Les Hazlewood Yep, absolutely - delegated authentication is pretty easy to support. Probably the easiest way to go about this is to create a custom Realm implementation that uses the proprietary API, does the lookup and returns it in a format Shiro understands. There are a couple of ways about thinking about this, but since this is specific to your environment, and not as much about the article, please ask on the Shiro mailing list and we can help you through it. Best, Les Shiro Framework Deployment by Jack Vinijtrongjit Based on my limited understanding, it seems that Shiro acts almost like a client to a user management system like MS Active Directory, DB or in house application that can supply information about users/roles/permissions through custom realm. Does this mean that for deployment, Shiro is basically deployed with every application/JVM? Are each of them caching their own instances of the same user, for example in load balancing configuration? What we are trying to avoid is client-server configuration where Shiro is just another server. That would simply kill the performance. How does it scale? Is the cache simply a local cache and that I have to implement my old shared cache for all instances to share the users information? Thanks! Re: Shiro Framework Deployment by Les Hazlewood Apache Shiro is a framework - not a standalone server - distributed as one or more .jar files that are embedded in your application's distribution (e.g. a web application's '/WEB-INF/lib' directory). Each application instance has a single Shiro SecurityManager instance configured by a developer as part of the application's configuration, and that lives in-memory/in-process per app instance. As for scale, it scales extremely well - Shiro is deployed often in extremely high performance enterprise application environments (think hundreds of web servers talking to dozens of application servers). Caching is a first-level citizen in Shiro's API and used heavily - you can configure Shiro to use any of the modern cache mechanisms for very large-scale environments (local cache, distributed cache - TerraCotta/EhCache, Coherence, GigaSpaces, Memcache, etc). Feel free to ask about the specifics in the Shiro user list, or drop me a line at les at-sign katasoft.com to talk more about your use cases. HTH, Les HTTP Digest Access Authentication by Lukas L Currently I search for a simply Java security framework, which supports HTTP digest access authentication. I read in some posts that Shiro does only support basic authentication. Is there a way to perform digest access authentication within Shiro? Thanks!
https://www.infoq.com/articles/apache-shiro
CC-MAIN-2017-34
en
refinedweb
Overview: In this project you will learn to use a basic functionality of chipKIT board in MPLAB X IDE, which is to read an analog value and output the corresponding digital reading for it. A potentiometer is a three-terminal resistor with a sliding contact that forms an adjustable voltage divider. Using a potentiometer measuring instrument, you could measure the electric potential. In this example we use the potentiometer in the chipKIT Basic IO shield. Find the same project using MPIDE here. Hardware Used: To do the analog read project, you require the following hardware devices. - chipKIT Uno 32 - chipKIT Basic IO shield - PICkit® 3 In-Circuit Debugger/Programmer - USB cable - 6 pin header to connect chipKIT board and PICkit® 3 Reference: The reference guide for each board, schematics and other resources are available on their individual homepages: - chipKIT Uno32 homepage at: - chipKIT Basic I/O Shield homepage: Procedure: - Connect the hardware devices. The PICkit® 3 should be connected to the chipKIT Uno32 board through a suitable header and use the USB cable to connect the PICkit® 3 to your PC. - Place the IO shield on top of the Uno32 board with a bit of a firm press. A0-potentiometer of the IO shield will be used as peripheral for Uno32 board, in this excercise. - Once the hardware setup is made and the device drivers are updated for PICkit® 3 on your computer, launch MPLAB X (Start>All Programs>Microchip>MPLAB X IDE>MPLAB X IDE vx.xx on Window® Machines ). - Create a new project through File>New Project. In the pop up new project window, select Standalone Project under Projects and click Next. Under Family scroll and select 32-bit MCUs (PIC32). Note the processor number from the chipKIT board’s reference manual and enter it under the Device option. Click Next. - Select PICkit3 in the Hardware Tools, click Next. Select XC32 (v1.20) under Compiler Toolchains, click Next. Give a project name to create the project. - In the Project tab on the left side of the MPLAB X IDE, right click on Source Files > New > C Main File.. and give a file name for main source code. Once the dot c file opens, select all the codes in the file and replace it with the codes below. #include <plib.h> #include <xc.h> #pragma config FPLLMUL = MUL_20, FPLLIDIV = DIV_2, FPLLODIV = DIV_1, FWDTEN = OFF #pragma config POSCMOD = HS, FNOSC = PRIPLL, FPBDIV = DIV_1 #define SYS_FREQ (80000000L) unsigned int channel4; // conversion result as read from result buffer unsigned int offset; // buffer offset to point to the base of the idle buffer int main(void) { // Configure the device for maximum performance but do not change the PBDIV // Given the options, this function will change the flash wait states and // enable prefetch cache but will not change the PBDIV. The PBDIV value // is already set via the pragma FPBDIV option above.. SYSTEMConfig(SYS_FREQ, SYS_CFG_WAIT_STATES | SYS_CFG_PCACHE); // configure and enable the ADC CloseADC10(); // ensure the ADC is off before setting the configuration // define setup parameters for OpenADC10 // Turn module on | ouput in integer | trigger mode auto | enable autosample #define PARAM1 ADC_MODULE_ON | ADC_FORMAT_INTG | ADC_CLK_AUTO | ADC_AUTO_SAMPLING_ON // define setup parameters for OpenADC10 // ADC ref external | disable offset test | disable scan mode | perform 2 samples | use dual buffers | use alternate mode #define PARAM2 ADC_VREF_AVDD_AVSS | ADC_OFFSET_CAL_DISABLE | ADC_SCAN_OFF | ADC_SAMPLES_PER_INT_9 | ADC_ALT_BUF_OFF | ADC_ALT_INPUT_OFF // define setup parameters for OpenADC10 // use ADC internal clock | set sample time #define PARAM3 ADC_CONV_CLK_INTERNAL_RC | ADC_SAMPLE_TIME_15 // define setup parameters for OpenADC10 // do not assign channels to scan #define PARAM4 SKIP_SCAN_ALL // define setup parameters for OpenADC10 // set AN2 as analog inputs #define PARAM5 ENABLE_AN2_ANA // use ground as neg ref for A | use AN2 for input A // configure to sample AN2 SetChanADC10( ADC_CH0_NEG_SAMPLEA_NVREF | ADC_CH0_POS_SAMPLEA_AN2); // configure to sample AN2 OpenADC10( PARAM1, PARAM2, PARAM3, PARAM4, PARAM5 ); // configure ADC using parameter define above EnableADC10(); // Enable the ADC while ( ! mAD1GetIntFlag() ) { } // wait for the first conversion to complete so there will be vaild data in ADC result registers // the result of the conversion is available in channel4. while (1) { channel4 = ReadADC10(0); // read the result of channel 4 conversion from the idle buffer } return 0; } - To read the potentiometer value, access the watch window from Window > Debugging > Watches or you can use the shortcut key Alt+Shift+2. In the watch window panel that opens on the MPLAB X IDE, enter the new watch as channel4, in the blank <Enter new watch> area as shown below in the figure. - In the program editor window, put a breakpoint by clicking on the line number after the code line channel4 = ReadADC10(0); // read the result of channel 4 conversion from the idle bufferThat is on the line with the closing braces of the while loop. This should create a red box in the number line as shown in the figure below, - Click on Debug Project icon shown below, - Once the program halts its execution at the line with the breakpoint, click on Watches output window, right click on the expand button under Value column and select Display value column as > Decimal. The value that appear in this column for channel4 variable corresponds to the potentiometer reading which is read in the range of 0 to 1023. A value 0 corresponds to 0 volts and a value 1023 corresponds to 3.3 volts. - Now, click on Continue button from the top pane and observe the value of channel4 to change and reflect the potentiometer reading as described before. - If you would like to change the potentiometer knob and see a different value, then, change the potentiometer knob and click on the Reset button on the top pane. This will initialize the variables back to zero. Clicking on the Play button on the top pane, will get your changed potentiometer value on the Watches window. Below is a screenshot showing the potentiometer value for 3.3 volts being read in the Watch window. VN:F [1.9.22_1171] VN:F [1.9.22_1171]
http://chipkit.net/tag/breakpoint-mplab/
CC-MAIN-2017-34
en
refinedweb
On Wednesday 24 October 2007 21:12, Kay Sievers wrote:> On 10/24/07, Nick Piggin <nickpiggin@yahoo.com.au> wrote:> > On Tuesday 23 October 2007 10:55, Takenori Nagano wrote:> > > Nick Piggin wrote:> > > > One thing I'd suggest is not to use debugfs, if it is going to> > > > be a useful end-user feature.> > >> > > Is /sys/kernel/notifier_name/ an appropriate place?> >> > I'm curious about the /sys/kernel/ namespace. I had presumed that> > it is intended to replace /proc/sys/ basically with the same> > functionality.>> It was intended to be something like /proc/sys/kernel/ only.Really? So you'd be happy to have a /sys/dev /sys/fs /sys/kernel/sys/net /sys/vm etc? "kernel" to me shouldn't really imply thestuff under the kernel/ source directory or other random stuffthat doesn't fit into another directory, but attributes that aredirectly related to the kernel software (rather than directlyassociated with any device).> > I _assume_ these are system software stats and> > tunables that are not exactly linked to device drivers (OTOH,> > where do you draw the line? eg. Would filesystems go here?>> We already have /sys/fs/ ?>> > Core network algorithm tunables might, but per interface ones probably> > not...).>> We will merge the nonsense of "block/", "class/" and "bus/" to one> "subsystem". The block, class, bus directories will only be kept as> symlinks for compatibility. Then every subsystem has a directory like:> /sys/subsystem/block/, /sys/subsystem/net/ and the devices of the> subsystem are in a devices/ directory below that. Just like the> /sys/bus/< name>/devices/ layout looks today. All subsystem-global> tunables can go below the /sys/subsystem/<name>/ directory, without> clashing with the list of devices or anything else.Makes sense.> > I don't know. Is there guidelines for sysfs (and procfs for that> > matter)? Is anyone maintaining it (not the infrastructure, but> > the actual content)?>> Unfortunately, there was never really a guideline.>> > It's kind of ironic that /proc/sys/ looks like one of the best> > organised directories in proc, while /sys/kernel seems to be in> > danger of becoming a mess: it has kexec and uevent files in the> > base directory, rather than in subdirectories...>> True, just looking at it now, people do crazy things like:> /sys/kernel/notes, which is a file with binary content, and a name> nobody will ever be able to guess what it is good for. That should> definitely go into a section/ directory. Also the VM stuff there> should probably move to a /sys/vm/ directory along with the weird> placed top-level /sys/slab/.Top level directory IMO should be kept as sparse as possible. Ifyou agree to /sys/mm for example, that's fine, but then slab shouldgo under that. (I'd prefer all to go underneath /sys/kernel, but...).It would be nice to get a sysfs content maintainer or two. Justhaving new additions occasionally reviewed along with the rest ofa patch, by random people, doesn't really aid consistency. Would itbe much trouble to ask that _all_ additions to sysfs be accompaniedby notification to this maintainer, along with a few line description?(then merge would require SOB from said maintainer).For that matter, this should be the case for *all* userspace APIchanges (kernel-user-api@vger.kernel.org?)-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at
http://lkml.org/lkml/2007/10/24/655
CC-MAIN-2017-34
en
refinedweb
I recently accepted Packt Publishing's invitation to review Rami Sarieddine's book Developing Windows Store Apps with HTML5 and JavaScript. The Preface of the book describes the book as "a practical, hands-on guide that covers the basic and important features of a Windows Store app along with code examples that will show you how to develop these features." The Preface adds that the book is for "all developers who want to start creating apps for Windows 8" and for "everyone who wants to learn the basics of developing a Windows Store app." Chapter 1 of Developing Windows Store Apps with HTML5 and JavaScript introduces "HTML5 structural elements" (semantic elements, media elements, form elements, custom data attributes) supported in the Windows 8 environment. The section on semantic elements covers elements such as <header>, <nav>, <article>, and <address>. The section on media elements provides detailed coverage of the <video> and <audio> elements. The section on form elements discusses the "new values for the type attribute are introduced to the <input> element." A table is used to display the various types (examples include tel, email, and search) with descriptions. There is discussion on these input types along with how to add validation to the input types. Most of this initial chapter of Developing Windows Store Apps with HTML5 and JavaScript covers general HTML5 functionality, but there are a few references to items specific to Windows 8. For example, the last new material before the first chapter's Summary is on "using the Windows Library for JavaScript (WinJS) to achieve more advanced binding of data to HTML elements."Chapter 2: Styling with CSS3 Like the first chapter, Chapter 2 focuses mostly on a general web concept, in this case Cascading Style Sheets (CSS). Sarieddine states that CSS is responsible for "defining the layout, the positioning, and the styling" of HTML elements such as those covered in the first chapter. In introducing CSS, the second chapter of Developing Windows Store Apps with HTML5 and JavaScript provides an overview of four standard selectors (asterisk, ID, class, and element), attribute selectors (including prefix, suffix, substring [AKA contains], hyphen, and whitespace), combinator selectors (including descendant, child/direct, adjacent sibling, and general sibling), pseudo-class selectors, and pseudo-element selectors. Chapter 2 does cover some Microsoft/Windows-specific items. Specifically, the chapter introduces the Grid layout and the Flexbox layout. The author explains that these have -ms prefixes because they are currently specific to Microsoft (Windows 8/Internet Explorer 10), but that they are moving through the W3C standardization process. The second chapter of Developing Windows Store Apps with HTML5 and JavaScript covers animation with CSS and introduces CSS transforms before concluding with brief discussion of CSS media queries.Chapter 3: JavaScript for Windows Apps Developing Windows Store Apps with HTML5 and JavaScript's third chapter covers "features provided by the Windows Library for JavaScript (the WinJS library) that has been introduced by Microsoft to provide access to Windows Runtime for the Windows Store apps using JavaScript." The delivered implication of this is that this is the first chapter of the book that is heavily focused on developing specifically Windows Store Apps. Sarieddine covers use of Promise objects to implement asynchronous programming in JavaScript's single-threaded environment rather than using callback functions directly. The author also covers use of the WinJS.Utilities namespace wrappers of document.querySelector and querySelectorAll. Coverage of the WinJS.xhr function begins with the description of it being a wrapper to "calls to XMLHttpRequest in a Promise object." Chapter 3 concludes with a discussion of "standard built-in HTML controls" as well as WinJS-provided controls "new and feature-rich controls designed for Windows Store apps using JavaScript." This discussion includes how WinJS-provided controls are handled differently in terms of code than standard HTML controls. This third chapter is heavily WinJS-oriented. It also includes the first non-trivial discussion and illustrations related to use of Visual Studio, a subject receiving even more focus in the fourth chapter.Chapter 4: Developing Apps with JavaScript Chapter 4 of Developing Windows Store Apps with HTML5 and JavaScript is intended to help the reader "get started with developing a Windows 8 app using JavaScript." It was early in this chapter that I learned that Windows Store apps run only on Windows 8. The chapter discusses two approaches for acquiring Windows 8 and downloading necessary development tools such as Visual Studio Express 2012 for Windows 8 from Windows Dev Center. The chapter discusses how to obtain or renew a free developer license via Visual Studio. The fourth chapter also discusses languages other than HTML5/CSS3 that can be used to develop Windows Store apps. It then moves onto covering development using Visual Studio templates. Several pages are devoted to discussion on using these standard templates and there are several illustrations of applying Visual Studio in this development.Chapter 5: Binding Data to the App The fifth chapter of Developing Windows Store Apps with HTML5 and JavaScript discusses "how to implement data binding from different data sources to the elements in the app." As part of this discussion of data binding, the chapter covers the WinJS.Binding namespace ("Windows library for JavaScript binding") for binding styles and data to HTML elements. Examples in this section illustrate updating of HTML elements' values and styles. Interestingly, it is in this fifth chapter that the author points out that "Windows 8 JavaScript has native support for JSON." The chapter's examples also discuss and illustrate use of Windows.Storage. Chapter 5's coverage of formatting and displaying data introduces "the most famous controls" of ListView and FlipView and then focuses on ListView. This portion of the chapter then moves on to illustrate use of WinJS templates (WinJS.Binding.Template). The final topic of Chapter 5 is sorting and filtering data and more example code is used here for illustration. Chapter 6 focuses on how to make a Windows 8 application "responsive so that it handles screen sizes and view state changes and responds to zooming in and out." The chapter begins by introducing view states: full screen landscape, full screen portrait, snapped view, and filled view. The chapter discusses snapping (required for apps to support) and rotation (recommended for apps to support). It then moves onto covering use of "CSS media queries" and "JavaScript layout change events." Chapter 6 also introduces semantic zoom, described on the Guidelines for Semantic Zoom page as "a touch-optimized technique used by Windows Store apps in Windows 8 for presenting and navigating large sets of related data or content within a single view." Sarieddine describes semantic zoom as a technique "used by Windows Store apps for presenting—in a single view—two levels of detail for large sets of related content while providing quicker navigation." There are several pages of code illustrations and explanatory text on incorporating semantic zoom in the Windows 8 application.Chapter 7: Making the App Live with Tiles and Notifications The seventh chapter of Developing Windows Store Apps with HTML5 and JavaScript introduces the concept of Windows 8 tiles. The chapter discusses the app tile ("a core part of your app") and live tiles ("shows the best of what's happening inside the app"). Windows 8 Badges and Notifications are also covered in this chapter.Chapter 8: Signing Users In Chapter 8 is focused on authentication in a Windows 8 app. The chapter discusses use of the Windows 8 SDK and "a set of APIs" that "allow Windows Store apps to enable single sign on with Microsoft accounts and to integrate with info in Microsoft SkyDrive, Outlook.com, and Windows Live Messenger." The eighth chapter's coverage includes discussion of open standards supported by Live Connect: OAuth 2.0, REST, and JSON. The chapter also covers reserving an app name on the Windows store, working with Visual Studio 2012 for Windows 8, and working with Live SDK downloads.Chapter 9: Adding Menus and Commands Chapter 9 of Developing Windows Store Apps with HTML5 and JavaScript focuses on adding menus and commands to the app bar. This coverage includes discussion on where to place the app bar and how the UX guidelines recommend placing the app bar on the bottom because the navigation bar goes on top of a Windows 8 app.Chapter 10: Packaging and Publishing Developing Windows Store Apps with HTML5 and JavaScript's tenth chapter introduces the Windows Store and likens it to "a huge shopping mall" in which the reader's new app would be like "a small shop in that mall." The author states that the Windows Store Dashboard is "the place where you submit the app, pave its way to the market, and monitor how it is doing there." The first step in the process of submitting a Windows app to the Windows Store for certification was covered in the chapter on authentication (Chapter 8) and this chapter picks up where that left off. Steps covered in this chapter include providing the application name, setting the "selling details," adding services, setting age and rating certifications, specifying cryptography and encryption used by the app, uploading app packages generated with Visual Studio, adding app description and other metadata about the app, and notes to testers evaluating app for Windows Store. The chapter moves from coverage of the Windows App submission process using Windows Store Dashboard to using Visual Studio's embedded Windows Store support. Of particular interest in this section is coverage of how to use Visual Studio to package a Windows 8 app so that the "package is consistent with all the app-specific and developer-specific details that the Store requires." The majority of this chapter's examples depend on having a Windows Store developer account. The chapter also includes a reference to a page on avoiding common certification failures.Chapter 11: Developing Apps with XAML All of the earlier chapters of Developing Windows Store Apps with HTML5 and JavaScript focused on developing Windows Store apps with traditional web development technologies HTML, CSS, and JavaScript, but the final chapter looks at using different platforms for creating Windows Store Apps. Although most of this chapter looks at developing Windows Store apps using the alternate development platform of XAML/C#, there is brief discussion of more general considerations when using alternate platforms for developing Windows Store apps. The chapter specifically mentions multiple approaches using C++ and C# to develop Windows Store apps. Using Extensible Application Markup Language (XAML) for developing Windows 8 applications is described similar to the approach used for JavaScript as discussed earlier in this book. One of the examples demonstrates using Visual Studio standard Windows Store App templates such as Blank App (XAML), Grid App (XAML), and Split App (XAML). The chapter dives into basics of developing an XAML-based Windows Store app and introduces XAML based on HTML and XML concepts and differences. The final chapter has a "Summary" section, but the final paragraph of that chapter is actually a summary of the entire book. A potential purchaser of this book could read this final paragraph on page 158 to get a quick overview of what the book covers.Targeted Audience Developing Windows Store Apps with HTML5 and JavaScript is well-titled in terms of describing what the book is about. The book clearly fulfills its objective of demonstrating how to use HTML5 and JavaScript to develop Windows Store Apps. Although the book does briefly discuss other technologies and platforms for building Windows Store Apps, these discussions are very brief and and mostly references rather than detailed descriptions. The reader most likely to benefit from this book is a developer interested in applying HTML, JavaScript, and CSS to develop Windows Store apps. The book does provide introductory material on these technologies for those not familiar with them, but at least some minor HTML/CSS/JavaScript experience would be a benefit for the reader. This book would obviously not be a good fit for someone wishing to learn how to develop apps for any environment other than the Windows Store and it would only be of marginal benefit to readers wanting to develop Windows Store apps with technologies other than HTML, JavaScript, and CSS.Conclusion Developing Windows Store Apps with HTML5 and JavaScript delivers on what its title advertises. It provides as comprehensive of an introduction as roughly 160 pages allows to developing and deploying Windows Store apps using JavaScript and HTML. Packt Publishing provided me a PDF for this review and one of the advantages of the electronic form is the numerous screen snapshots of Windows 8 apps and Visual Studio are in full color. I especially liked that little time was wasted in the book and it efficiently covered quite a bit of ground in a relatively short number of pages.Additional Information Here are some additional references related to this book including other reviews of this book. - Packt Publishing Developing Windows Store Apps with HTML5 and JavaScript Page - Rami Sarieddine's (Author's) Blog: My Book is Out - Damir Arh's (DZone) Book Review of Developing Windows Store Apps with HTML5 and JavaScript - Wriju's Blog: Book Review : Developing Windows Store Apps with HTML5 and JavaScript - Juri Strumpflohner: Developing Windows Store Apps with HTML5 and JavaScript - Abhijit Jana: Developing Windows Store Apps with HTML5 and JavaScript – A Review
http://marxsoftware.blogspot.com/2013/11/book-review-developing-windows-store-apps.html
CC-MAIN-2017-34
en
refinedweb
[ ] Advertising Warren Falk commented on LUCENENET-593: --------------------------------------- I don't think that's quite it. Simply installing that package appears to have no effect. You can see that IndexWriter.SetDiagnostics() does the following: {code:none} diagnostics["os.arch"] = Constants.OS_ARCH; {code} And since Constants.OS_ARCH is this: {code:none} public static readonly string OS_ARCH = GetEnvironmentVariable("PROCESSOR_ARCHITECTURE", "x86"); {code} ...we will always get the value of the environment variable. This _is_ the underlying OS's architecture, so that's correct - but only in Windows. (Except also in .Net Core, there's this extra code in GetEnvironmentVariable()): {code:none} #if NETSTANDARD if (variable == "PROCESSOR_ARCHITECTURE") { return RuntimeInformation.OSArchitecture.ToString(); } #endif {code} This has no effect on me because Mono is .Net Framework not .Net Standard (I am targeting net452, not any netstandard). But I think this block is what you're talking about. If I remove the #if block, then install that package to the dependencies in Lucene.Net.project.json, then the problem is resolved. So perhaps this is the resolution you are suggesting. Let me know if this is a good solution and I'll submit a PR Keep in mind that this changes the actual stored value of the os_arch to X64 instead of AMD64 (for .Net Framework, it would already have been X64 in .Net Standard). Also, the net451 version of Lucene.Net otherwise has no dependencies (this will be the first)... if that matters. > NullReferenceException in Linux > ------------------------------- > > Key: LUCENENET-593 > URL: > Project: Lucene.Net > Issue Type: Bug > Components: Lucene.Net Core > Affects Versions: Lucene.Net 4.8.0, Lucene.Net 5.0 PCL > Environment: Linux (ubuntu 64 bit, maybe all linux variants) > Reporter: Warren Falk > > A NullReferenceException on any attempt to query in Linux (ubuntu x64 in my > tests) > I was able to track this down to the following line in Constants.cs > {{public static readonly string OS_ARCH = > GetEnvironmentVariable("PROCESSOR_ARCHITECTURE", "x86");}} > Sure enough, PROCESSOR_ARCHITECTURE is not set by default in ubuntu server > x64 and when I set it, there is no failure. > I would fix this, but I'm not sure what this value is used for, so I am not > sure what the appropriate behavior should be. Should we try to find the > correct architecture here? And what is the correct string, "amd64", "x64", > "x86_64"? Or do we really just want to know the value of that if it is > actually not set, should we leave it blank? The issue is that the InfoWriter > can't write a null, but it could write an empty string. > I'll submit a pull request if anyone can tell me what correct behavior should > be. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
https://www.mail-archive.com/dev@lucenenet.apache.org/msg03751.html
CC-MAIN-2017-34
en
refinedweb
Python wrapper for NVidia Cg Toolkit What is python-cg? python-cg is a Python wrapper for NVidia Cg Toolkit runtime. I’ve started it because I like Python, I like NVidia CG and I want to to do some computer game/3d graphicsprototyping and research. Also I still find C++ counterproductive as far as my needs are concerned and I don’t want to waste my time doing boring stuff. Programming in Python is fun. I know about some projects that were meant to bring CG to Python but as far as I know they’re history now. Project is hostead at GitHub:. What’s the state? The project is in very early development stage. Overview of what’s supported right now: - Cg contexts - creating - destroying - CgFX effects - creating from file - creating directly from source code - accessing effects` techniques and their passes - accessing effect parameters with their names, semantics and parameter-specific metadata (rows, columns etc.) - setting sampler parameters and most of numerical parameters What doesn’t work at the moment and there’s no plan to implement it: - everything that’s left (well, until I decide I need some of it or someone else does that) Requirements This project requires: - NVidia Cg Toolkit ≥ 3.0 - Python interpreter (+ development files): - 2.x ≥ 2.6, or - 3.x ≥ 3.2 - C and C++ compiler Python packages required to build and install python-cg: - Cython ≥ 0.18 - numpy To build documentation/run tests you also need: - Mock ≥ 1.0 - Nose ≥ 1.2 - Sphinx ~ 1.2 (development version) Documentation Pregenerated documentation can be found at. You can also build documentation all by yourself by calling: sphinx-build -b html docs docs/build/html Generated HTML files are placed in docs/build/html/ directory. Building To build the project in place, run: python setup.py build_ext --inplace Important information - This project works with OpenGL and OpenGL only - It uses row-major matrices by default, just like numpy does Quickstart First you need to create an instance of CG class and use it to create new Context: from cg import CG cg = CG() context = cg.create_context() We want to use an effect to render some stuff so we’re gonna create Effect from file: effect = context.create_effect_from_file('effect.cgfx') Note This assumes that you have a file named effect.cgfx and that it contains a valid CG effect. We now have access to Effect’s techniques and parameters: for technique in effect.techniques: # ... for parameter in effect.parameters: # ... For the sake of simplicity let’s say we have a parameterless effect with only one Technique: technique = effect.techniques[0] Now we can access technique’s passes. Each Pass has methods begin() and end() and the actual drawing has to take place between a call to begin and end: gl.glClear(gl.GL_COLOR_BUFFER_BIT) for pass_ in technique.passes: pass_.begin() gl.glBegin(gl.GL_TRIANGLES) gl.glVertex3f(-0.5, -0.5, 0) gl.glVertex3f(0.5, -0.5, 0) gl.glVertex3f(0, 0.5, 0) gl.glEnd() pass_.end() # swap buffers You can find complete, runnable example application in example directory. Please note that it requires (in addition to python-cg requirements): Development version of SFML 2 Python packages listed in example/requirements.txt: pip install -r example/requirements.txt Then to run the example: python setup.py build_ext --inplace PYTHONPATH=. python example/main.py Testing To run tests, execute: python runtests.py Download Files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/python-cg/
CC-MAIN-2017-34
en
refinedweb
You’ve already got a first target system installed, and now you’ve written some new code and want to deploy it. This article will show you how to setup make and fab commands that use reltools to build & install new releases. Appup Your code should be part of an OTP application structure. Additionally, you will need an appup file in the ebin/ directory for each application you want to upgrade. There’s a lot you can do in an appup file: - reload a module - add or delete a module - update a running process - and lots more. Refer to the Appup Cookbook and appup reference manual for more details. Once you’ve updated app files with the newest version and configuration and created appup files with all the necessary commands, you’re ready to create a new release. Note: The app configuration will always be updated to the newest version, even if you have no appup commands. Release To create a new release, you’ll need a new rel file, which I’ll refer to as NAME-VSN.rel. VSN should be greater than your previous release version. My usual technique is to copy my latest rel file to NAME-VSN.rel, then update the release VSN and all the application versions. Note: reltools assumes that the rel file will be in $ROOTDIR/releases/, where $ROOTDIR defaults to code:root_dir(). This path is also used below in the make and fab commands. You can pass a different value for $ROOTDIR, but releases/ is hard coded. This may change in the future, but for now your rel files must be in $ROOTDIR/releases/ if you want to use reltools. Reltools Before you finalize the new release, make sure reltools is in your code path. There 2 ways to do this: - Make a copy of reltools and add it to your application. - Clone elib and add it to your code path with erl -pa PATH/TO/elib/ebin. If you choose option 1, be sure to include reltools in your app modules, and add it to your appup file with {add_module, reltools}. But I’ll assume you want option 2 because it provides cleaner code separation and easier release handling. Keeping elib external means you can easily pull new code, and only need to add the elib application to your rel file with the latest vsn. Make Upgrade Now that you have a new release defined, and elib is in your code path, you’re ready to build release upgrade packages. Below is the make command I use to call reltools:make_upgrade("NAME-VSN"). Be sure to update PATH/TO/ to your particular code paths. ERL=erl # use default erl command, but can override path on command line src: FORCE @$(ERL) -pa lib/*/ebin -make # requires an Emakefile upgrade: src @$(ERL) -noshell \ # run erlang with no shell -pa lib/*/ebin \ # include your local code repo -pa PATH/TO/elib/ebin \ # include elib -pa PATH/TO/erlang/lib/*/ebin \ # include local erlang libs -run reltools make_upgrade $(RELEASE) \ # run reltools:make_upgrade -s init stop # stop the emulator when finished FORCE: # empty rule to force run of erl -make Using the above make rules, you can do make upgrade RELEASE=PATH/TO/releases/NAME-VSN to build a release upgrade package. Once you can do this locally, you can use fab to do remote release builds and installs. But in order to build a release remotely, you need to get the code onto the server. There are various ways to do this, the simplest being to clone your repo on the remote server(s), and push your updates to each one. fab release build install Below is an example fabfile.py for building and installing releases remotely using fab. Add your own hosts and roles as needed. PATH/TO/TARGET should be the path to your first target system. release is a separate command so that it you are only asked for NAME-VSN once, no matter how many hosts you build and install on. build will run make upgrade RELEASE=releases/NAME-VSN on the remote system, using the target system’s copy of erl. Theoretically, you could build a release package once, then distribute it to each target system’s releases/ directory. But that requires each target system being exactly the same, with all the same releases and applications installed. If that’s the case, modify the above recipe to run build on a single build server, have it put the release package into all the other node’s releases/ directory, then run install on each node. install uses _rpcall to run rpc:call(NODE@HOST, reltools, install_release, ["NAME-VSN"]). I’ve kept _rpcall separate so you can see how to define your own fab commands by setting env.mfa. from fabric.api import env, prompt, require, run env.erl = 'PATH/TO/TARGET/bin/erl' def release(): '''Prompt for release NAME-VSN. rel file must be in releases/.''' prompt('Specify release as NAME-VERSION:', 'release', validate=r'^\w+-\d+(\.\d+)*$') def build(): '''Build upgrade release package.''' require('release') run('cd PATH/TO/REPO && hg up && make upgrade ERL=%s RELEASE=releases/%s' % (env.erl, env.release)) def install(): '''Install release to running node.''' require('release') env.mfa = 'reltools,install_release,["%s"]' % env.release _rpccall() def _rpccall(): require('mfa') evalstr = 'io:format(\"~p~n\", [rpc:call(NODE@%s, %s)])' % (env.host, env.mfa) # NOTE: local user must have same ~/.erlang.cookie as running nodes run("%s -noshell -sname fab -eval '%s' -s init stop" % (env.erl, evalstr)) Workflow Once you’ve updated your Makefile and created fabfile.py, your workflow can be something like this: - Write new application code. - Update the app and appup files for each application to upgrade. - Create a new rel file as releases/NAME-VSN.rel. - Commit and push your changes. - Run fab release build install. - Enter NAME-VSNfor your new release. - Watch your system hot upgrade in real-time 🙂 Troubleshooting Sometimes reltools:install_release(NAME-VSN) can fail, usually when the release_handler can’t find an older version of your code. In this case, your new release will be unpacked but not installed. You can see the state of all the known releases using release_handler:which_releases().. This can usually be fixed by removing old releases and trying again. Shell into your target system and do something like this (where OLDVSN is the VSN of a release marked as old): See the release_handler manual for more information. release_handler:remove_release("OLDVSN"). % repeat as necessary release_handler:install_release("VSN"). release_handler:make_permanent("VSN").
https://streamhacker.com/2009/07/15/erlang-release-handling-with-fab-and-reltools/
CC-MAIN-2017-34
en
refinedweb
Talk:EclipseLink/Development/Indigo/Multi-Tenancy - @TenantShared seems much too complicated, why would the user ever require multiple tenant ids in the same Entity? - Would be better to support the simple case, and allow the user to use AdditionalCriteria if they have more complex requirements. - Also, why have the property specified in TenantId and have the nested Column, why not just have a fixed property name "eclipselink.tenant" and have a TenantColumn similar to DiscriminatorColumn? - Will need someway to know the type of the TenantId, similar to the DiscriminatorType, or just maybe have a type which is a Class. - Instead of changing every buildRow method to specifically include the tenant column it would be nice to have a more generic mechanism, that can be used for other purposes or more advanced tenant requirements. We could add a special mapping to the descriptor that writes to the column, or have some sort of generic ExtensionPolicy or set of policies on the descriptor that have hooks to support Tenants, SoftDeletes, Auditing, History, etc. - James.sutherland.oracle.com 15:08, 2 March 2011 (UTC) - The requirement for multiple Tenant Ids is that not all tenants may use the same column or that multiple columns represent the tenant id. Additional Criteria would not allow for updating those fields. We may want to change the annotation so that "TenantId" can be used directly and use "TenantShared" as the complex scenario or follow the JPA pattern and use "TenantIds". This would simplify the simple case. : @Entity @Table(name="EMP"); @TenantId(property="tenant-id") public class Employee { @Entity @Table(name="EMP"); @TenantIds( { TenantId(property="tenant-id"), TenantId(property="tenant-id2") } ) public class Employee { - Each Tenant should be given their own ServerSession. This should be doable by augmenting the session name or actually updating the code to store the ServerSession by TenantId as well. - Gordon Yorke 16:15, 3 March 2011 (UTC) - The default property name must be prefixed with "eclipselink" as all of our persistence unit properties are, "eclipselink.tenant.id" - The example shows passing a cache setting to creating an EntityManager, this is not possible. The cache must be disabled for the whole persistence unit. - An admin persistence unit code be build by just removing the tenant-id in a session/descriptor customizer. - James.sutherland.oracle.com 13:47, 8 March 2011 (UTC) - We should be managing the cache automatically for users. Unless explicitly overridden using a SessionName or disabling of the cache for each unique set of properties a new Session should be created and used. - There should be no EntityManager level cache settings. The cache is not managed at the client level, it is managed at the Session configuration level. If EntityManager level configuration of tenants is required then we should find an EMF that corresponds to the properties unless cache has been disabled. Switching of Tenant properties in a live EM should not be allowed. - One of the requirements is to support mapped TenantIds but there are no details on how that would be configured. Is it a matter of EclipseLink just detecting the mutliple writable fields, or with TenantId be applied to the mapping? Gordon Yorke 16:16, 8 March 2011 (UTC) - What is the value of the @Multitenant annotation. Would @TenantDescriminator not give the same value. If we need a new type of multitenancy, we could add a new annotation. (same question about the xml values) - Have we specified TABLE_PER_TENANT at all? Should this be a future enhancement, or a design feature in this doc? --Tom.ware.oracle.com 15:21, 16 March 2011 (UTC) Another issue is with Container Managed EMs and EMFs (ie. Injection). Properties are defined statically when using injection and updating properties after the injection has occurred has major concurrency implications. Likely we will want to have some sort of callback mechanism that is statically registered and would access the Current Context or a ThreadLocal to provide the Tenant-Id although there may be no guarantee that the injection thread is the calling thread but in practice it should be. - Gordon Yorke 18:12, 30 March 2011 (UTC) TABLE_PER_TENANT Strategy Is the TABLE_PER_TENANT available in EclipseLink 2.3?
http://wiki.eclipse.org/index.php?title=Talk:EclipseLink/Development/Indigo/Multi-Tenancy&oldid=260593
CC-MAIN-2014-49
en
refinedweb
Introduction Eclipse is an ideal platform for developing Web applications using Java™ technologies. The 3-tier design for building dynamic Web applications is well suited for use with JSPs and Servlets running in a servlet container like Apache Tomcat. The persistent data layer is aptly provided by the Derby database. The Eclipse Web Tools Platform (WTP) project set of tools for developing J2EE™ and Web applications, along with the Derby Eclipse plug-ins, allows for rapid and simplified Web development. This article discusses some of the functionality provided by the WTP, the Derby database plug-ins, and a complete sample application that uses JSPs, JavaServer Pages Standard Tag Library (JSTL), and Servlets. The sample application is a fictitious and simplified airline flight reservation system. In order to get the most out of this article, you should understand the basics of JSP, JSTL, and Servlet technology, understand simple SQL, and have some knowledge of Eclipse. Some of the features of the WTP are used in this article, but it is not a comprehensive tutorial of the WTP tools. To learn more on these topics please refer to the list of resources at the end of this article. If you already know some of the background of the WTP and want to get started downloading all of the required software, skip to the section Software requirements. Otherwise, read the next section to learn what the WTP is and how some of these components are used from within Eclipse to develop the sample application. IBM Cloudscape™ is the commercial release of the Apache Derby open source database. The names are used interchangeably in this article for general terms, unless reference to a specific file or name is used. Eclipse WTP project The Eclipse Web Tools Platform (WTP) project allows Eclipse users to develop J2EE Web applications. Included in this platform are multiple editors, graphical editors, natures, builders, a Web Service wizard, database access and query tools, and other components. The project provides a large number of tools. Only a limited number of these tools will be demonstrated in relation to building a Web application using Derby as the back-end database. The WTP's charter, as defined at, is: "... to build useful tools and a generic, extensible, standards-based tool platform upon which software providers can create specialized, differentiated offerings for producing Web-enabled applications." This article will not discuss building new tools for this platform, but instead as an open platform to build Web applications using Open Source components. Web standard tools and J2EE standard tools The WTP is divided into two sub-projects, the Web Standard Tools and the J2EE Standard Tools. The Web Standard Tools (WST) project provides common infrastructure that targets multi-tier Web applications. It provides a server view that allows you to publish resources created from within Eclipse and run them on a server. The WST does not include specific tools for the Java language, or for Web-framework-specific technology. The J2EE Standard Tools (JST) project provides tools that simplify development for the J2EE APIs, including EJB, Servlet, JSP, JDBC™, Web Services, and many more. The J2EE Standard Tools Project builds on the support for the Server Tools provided by the Web Standard Tools Project, including servlet and EJB containers. The next section discusses all the software components you need in order to build and run the sample application. Components of the Web application The sample application uses the following software components and technologies: - Eclipse - Use the IDE to write and run the sample application. It is the foundation for developing and building Java applications. - Use the Java Development Tools (JDT) included with Eclipse to compile the Java classes that are part of the application. - WTP - Use the editor to create the JSP files. The editor includes content assist for JSP syntax. - Use the Servers view to start and stop the external Tomcat servlet engine. - Use the J2EE perspective view to create the Dynamic Web application that assembles and configures the J2EE Web application, including the standard structure and deployment descriptor common to all J2EE Web applications. - Create a connection to a Derby database through the Database Explorer view. - Derby plug-ins - Add the Derby nature to the Dynamic Web project to include the JAR files in the project. - Start and stop the Derby network server during application development. - Test and run SQL queries using the ij SQL query tool. - Set the project's derby.system.home property to point to the database. - JavaServer Pages Standard Tag Library (JSTL) - This tag library enables JSP-based applications to use a standard tag library to perform common tasks. The sample application uses these tags to perform tasks like iteration and database access. Expression Language (EL), a scripting language, is also used in the JSPs. - Apache Tomcat servlet engine - Runs the Web application consisting of the JSPs and Servlets. - Provides support for the Servlet 2.4 and JSP 2.0 APIs, including support for EL. - Provides support for defining the Derby database as a Data Source in the deployment descriptor of the Web application. Software requirements The software described in this section is available for download at no cost and must be installed prior to running the examples and building the sample Web application. Either of the following Java development kits (Tomcat 5.5 requires at least 1.5): - IBM SDK version 1.5.x or higher. - Sun JDK version 1.5.x or higher. Eclipse and WTP. You can download one zip that contains the Eclipse SDK, all of the WTP prerequisites, and WTP itself. On Windows®, this file is called wtp-all-in-one-sdk-1.0-win32.zip. If you prefer to download the Linux® distribution, grab the file wtp-all-in-one-sdk-1.0-linux-gtk.tar.gz. You can download either of these files from eclipse.org (see Resources). In case you already have Eclipse installed, or some of the prerequisites for WTP, the versions of the plug-ins contained in the wtp-all-in-one zip file are listed below for you to compare. Also, the Download page lists these prerequisites for 1.0 WTP, so you need to make sure your components are at least as current as those listed here. If the versions available for download are different from those listed below, get the version recommended at the WTP site. - Eclipse 3.1.1, for Windows: eclipse-SDK-3.1.1-win32.zip - EMF SDK: emf-sdo-xsd-SDK-2.1.1.zip - GEF SDK: GEF-SDK-3.1.1.zip - Java EMF Model Runtime: JEM-SDK-1.1.0.1.zip - Web Tools Platform: wtp-1.0.zip Derby database Eclipse plug-ins (available from apache.org as zip files -- see Resources). The Apache Derby database recently released version 10.1 of the database engine. The following versions of the plug-ins are required to run on Eclipse 3.1: - Derby Core plug-in, Version 10.1.1 or higher (10.1.2 is recommended) - Derby UI plug-in, Version 1.1.0 Apache Tomcat. Download Version 5.5.15 (see Resources). JavaServer Pages Standard Tag Library (JSTL). Download the Standard 1.1 Taglib, jakarta-taglibs-standard-1.1.2.zip, from the Apache Jakarta Project (see Resources). The sample application source code and WAR file: - A WAR file is a Web Application Archive and is the standard unit for packaging and deploying J2EE Web applications. All J2EE-compliant servlet containers accept WAR files and can deploy them. Download the LowFareAir.war file to your file system (see Downloads). - Download the zip file LowFareAirData.zip, which contains the Derby database and sample SQL files to access the airlinesDB database. (See Downloads.) Software configuration After downloading all required components, you need to configure them so you can start building the application. Install a JDK If you do not have a Version 1.5.x or higher JDK, install it. A JDK is required, not just a JRE. Install Eclipse and the WTP Install Eclipse by unzipping the wtp-all-in-one-sdk-1.0-win32.zip into a directory where you want Eclipse to reside. If Eclipse is already installed and you downloaded the individual components listed above, unzip those in the Eclipse home directory, since they are all plug-ins and they will install to the plugins directory of Eclipse. Install and configure Jakarta Tomcat Unzip or install Jakarta Tomcat to a different directory than your Eclipse installation directory. Now you need to configure Eclipse to run Tomcat as a server from within Eclipse using the WTP. To do this: - Go to and select the tutorials link under the WTP Community section. - From the Tutorials page, select the tutorial called "Building a School Schedule Web Application." - Follow all the instructions in the section called "Installing the Tomcat Runtime in Eclipse" for this tutorial. However, select a Tomcat 5.5.15 installation instead of the 5.0.28 install like the instructions say. For the purposes of this sample Web application, you do not need to complete the whole tutorial. Install the Derby plug-ins Unzip both of the files (the Derby Core and UI plug-in zip files) to the plugins directory, from the Eclipse home directory. The Derby plug-ins come with a complete tutorial and examples of how to use all of their functionality. To access the help, select Help > Help Contents > Derby Plug-ins User Guide. Application design The LowFareAir Web application follows the standard 3-tier design model consisting of a presentation layer, business logic and control layer, and a data or persistence layer. The JSPs, including the JSTL tag libraries, provide the UI or presentation layer. The Servlets and supporting Java classes provide the business logic and control the flow of the application. The Derby database and JavaBeans provide the data layer. The diagram below illustrates this. Figure 1. Sample application design A note about accessing the data layer from the presentation layer The presentation layer, represented by the JSPs, should generally not interact directly with the data layer and hence should not be making database queries. The design of this application follows the accepted paradigm, with the exception of the first JSP. For quick prototyping efforts it is acceptable to combine the database access in the view layer, compromising the strict separation of data from view. The first JSP, Welcome.jsp, occupies both the presentation layer and the data layer by using the JSTL SQL library to issue an SQL query from the page. The other JSPs act as the presentation layer only, and pass all data handling responsibility to the Servlets, which interact with the Derby database. The JSTL SQL library example is shown here, in case you are interested in using this methodology for future prototyping of Web applications, but it is not recommended for production environments. LowFare Air sample application The sample application allows new users the chance to register for or existing users to log into the application. Once the user is logged in, numerous flights are presented for booking. Only direct flights are offered, so the flight chosen is checked to see if the origin and destination direct flight is available. If the flight is available the user can choose to book the flight. Finally, the user can see a history of all flights booked through LowFare Air. The flow of the sample application consists of these steps: - User registration or verification - The JSPs used in this part of the application are Welcome.jsp, Login.jsp, and Register.jsp - LoginServlet acts as the controller -- the user's name is either verified in the APP.USERS table in the Derby database, or inserted into the table. - A persistent cookie is set once a successful registration occurs, and the client's user ID is added to the session once a successful login occurs. - Flight retrieval and selection - Welcome.jsp is used to select the flight, and GetFlights.jsp is used to retrieve the flight. - CheckFlightsServlet acts as the controller. If there are flights between the two selected cities, the flight information is passed to GetFlights.jsp. If not, the user is returned to Welcome.jsp to select another flight. - If there are flights, the DerbyDatabase class places the flight information retrieved from the database into a JavaBean called FlightsBean. - Book the user's flight by updating the Flight History - The JSPs used are BookFlights.jsp and GoodBye.jsp. BookFlights.jsp asks the user for final confirmation on the flight they want to book. GoodBye.jsp displays all flights booked for the user with Derby Airlines. - UpdateHistoryServlet updates the APP.FlightHistory table with the users name and flight they just booked. The request is then forwarded to GoodBye.jsp. - Log out user - The final phase of the application is to either log out or to book another flight. The JSPs used are LoggedOut.jsp or, if the user wishes to book another flight, Welcome.jsp. - If the user chooses to log out the user ID is removed from the Session object. Therefore the next time the user returns to the site a persistent cookie remains, but the user ID is no longer in the Session object, and therefore the user must log in again. The figure below shows this same flow pictorially. Figure 2. Sample application flow Creating the Web project from the WAR To understand how to use the various tools included with the WTP and the Derby plug-ins, import the application as a WAR file, the standard packaging unit for Web applications, which is in a JAR file format. The first step towards building any Web application that uses JSPs or Servlets is to create a Dynamic Web Application. You can create one using the WTP set of tools, and it will automatically create the correct directory structure for J2EE Web applications. Import the WAR file into the Dynamic Web Project folder of the Project Explorer view to create a new Web project. Launch Eclipse, if it is not already running, and import the WAR file to create a new Dynamic Web Project by following these steps: - Open the J2EE Perspective. - From the Project Explorer view, right-click the Dynamic Web Projects folder. - Select Import, then in the Import window, select WAR file and click Next. - In the WAR Import window, browse to the LowFareAir.war file you downloaded earlier (see Software Requirements above). Name the project LowFareAir, and make sure the Target server is Apache Tomcat V5.5 (which you configured earlier -- see Software configuration above). Click Finish. Figure 3 shows the last step in the process. Figure 3. Importing a WAR file to create the Dynamic Web Project You also need to import three JAR files that are not included in the WAR file: jstl.jar and standard.jar from the Jakarta taglibs package you downloaded earlier, and the derbyclient.jar file from the Derby core plug-in. Normally a complete WAR file would contain these JAR files, but for demonstration purposes you should know how to import them into the Dynamic Web Project. To retrieve the JAR files from the Jakarta package, unzip the jakarta-taglibs-standard-1.1.2.zip file. The jstl.jar and standard.jar files are located in the newly created jakarta-taglibs-standard-1.1.2/lib directory. To import them: - Open the Dynamic Web Projects folder. The LowFareAir project you just imported will appear. Expand this folder, and then the WebContent folder. - Right-click the WebContent/WEB-INF/lib folder and select Import. In the Import window, select File System, and then click Next. - Browse to the subdirectory jakarta-taglibs-standard-1.1.2/lib, where you unzipped the taglibs, and select jstl.jar and standard.jar. Make sure you are importing into the LowFareAir/WebContent/WEB-INF/lib directory. Then click Finish. Now you need to add the derbyclient.jar file to the libraries available to the Web application. Your Web application will use the JDBC driver in derbyclient.jar to make connections to the database. To import derbyclient.jar: - Right-click the WebContent/WEB-INF/lib folder and select Import. In the Import window, select File System, then click Next. - Browse to the plugins directory under the Eclipse home directory, and then to the directory org.apache.derby.core_10.1.2. Select derbyclient.jar. Make sure you are importing into the LowFareAir/WebContent/WEB-INF/lib directory. Then click Finish. This completes importing the Web components, including the Java source files and all libraries for the application. Next, import the Derby database airlinesDB, complete with sample data. Configuring the data layer To configure the data layer and the tools that access the database for your application: - Add the Apache Derby Nature to the LowFareAir project. - Import the LowFareAirData.zip file into the project. The zip file contains the airlinesDBDerby database, which contains all of the data for the application as well as some sample SQL scripts. - Configure the Web application deployment descriptor, web.xml, to contain a data source pointing to the airlinesDBdatabase. - Set the Derby property derby.system.hometo point to the full path of the airlinesDBdatabase. By setting this property, all references to the airlinesDBdatabase in the JDBC connection URLs can just refer to 'airlinesDB' instead of the full file system path. Adding the Apache Derby nature The Web application uses a Derby database to store and query flight information for the fictional LowFareAir airlines. An easy way to access and use Derby databases in Eclipse is via the Derby plug-ins. The Derby plug-ins allow you to add the Derby nature to any Eclipse project. Adding a nature to a project, including a Dynamic Web Project, means that project "inherits" certain functionality and behaviour. Adding the Derby nature adds the Derby database JAR files and the command line tools bundled with Derby to the Eclipse environment. The Project Explorer view will now show the LowFareAir project you just created. To add the Derby nature to the LowFareAir project, right-click it and select the menu item Apache Derby > Add Apache Derby nature. Import LowFareAirData.zip Included in the source code is airlinesDB, the sample database used for the Web application. This database, along with some sample SQL, needs to be imported into the LowFareAir project. To do this: - Expand the Dynamic Web Projects folder. Right-click the LowFareAir folder and select Import. In the Import window, select Archive file, then click Next. - Browse to LowFareAirData.zip, and make sure that in the left frame, the / directory is checked. This includes the data and sql folders. For the name of the Into folder select LowFareAir, then click Finish. - If the import was successful, the LowFareAir folder should contain two new subfolders: data and sql. The data folder will contain the airlinesDB directory (database). - The sql directory contains three SQL files called airlinesDB.sql, flights.sql, and flighthistory_users.sql. Now all of the files required for the Web application have been imported, and the LowFareAir project should look similar in structure to Figure 4. Figure 4. Project Explorer view of LowFareAir project Configure web.xml with the Derby data source The web.xml file contains a data source entry to use the Derby airlinesDB database as a data source. This is not a requirement for any Web application to connect to a Derby database; however, the application uses a data source for the first JSP page for demonstration purposes. The other JSPs do not use the data source, and the Servlets use a standard Java class that use JDBC to connect to the database. To determine the location of the airlinesDB on the file system, right-click the airlinesDB folder under the data directory of the LowFareAir project and select Properties. The Properties window shows a Location field which has the full file system path to the airlinesDB directory. Copy this string so it can be used in the next step. For instance, this path might be something like: C:\eclipse\workspace\LowFareAir\data\airlinesDB Open web.xml (under the WebContent/WEB-INF directory) and browse to this section of it while viewing in Source mode (the entry in the param-value section has been modified with a line break for readability, but the URL should be a continuous line): Listing 1. Web.xml context-param section <context-param> <param-name>javax.servlet.jsp.jstl.sql.dataSource</param-name> <param-value> jdbc:derby://localhost:1527/C:/eclipse/workspace/LowFareAir/data/ / airlinesDB;user=a;password=b;, org.apache.derby.jdbc.ClientDriver </param-value> </context-param> Change the value in the <param-value> section to the database URL for your environment, using the full path to the airlinesDB database you just copied and save the file. Without editing this correctly, the first page of the application (Welcome.jsp) will fail. Also, you need to start the network server before running Welcome.jsp, since the URL shown above attempts to access the Network Server using the Derby Client driver. Note: Either forward or backward slashes can be used for the database connection URL in a Windows environment. Set derby.system.home for the project The next step to configuring the Derby database environment from within Eclipse is to edit the Derby system property called derby.system.home to point to the location of the airlinesDB database. This allows you to connect to airlinesDB using the Derby plug-ins without specifying the full file system path to the database. Only the name airlinesDB needs to be listed in the database connection URL. Use the path to the airlinesDB directory you copied earlier, just slightly modified, to set derby.system.home. - Right-click the LowFareAir project and select Properties. - In the left side of the Properties window for the LowFareAir project, select Apache Derby. - On the right side are the Apache Derby properties that you can change. The Derby System Property called derby.system.homeis currently set to the default value (.). Change it to point to the full path of the directory where the airlinesDB directory resides. Note: You can also modify the port that the network server listens on, in the portproperty. Edit the value for the derby.system.homeproperty to the full path of your data directory. Paste in the string you copied above, and remove the trailing \airlinesDB. So given the example path from earlier, the derby.system.home property would be: C:\eclipse\workspace\LowFareAir\data. Note: Do not enter the name of the database directory itself -- it should be the directory where the database directory resides, in this case data, not the airlinesDB directory itself. - Finally click OK to save the setting for the project. Next you'll start the Derby Network Server, make a connection to the airlinesDB database and issue some SQL using the ij tool provided with the Derby plug-ins. Starting the Derby network server and running ij Since you are about to run some queries against the tables in the airlinesDB, it's useful to know which tables you have and how they are defined. These are shown below. The SQL file, airlinesDB.sql, was used to create the database. Do not run airlinesDB.sql again unless you delete the old database and want to recreate all of the tables in a new database. Listing 2. Create table statements for the airlinesDB database CREATE TABLE APP.CITIES ( CITY_ID INTEGER NOT NULL constraint cities_pk primary key, CITY_NAME VARCHAR(24) NOT NULL, COUNTRY VARCHAR(26) NOT NULL, AIRPORT VARCHAR(26), LANGUAGE VARCHAR(16), COUNTRY_ISO_CODE CHAR(2) ); CREATE TABLE APP.FLIGHTS ( FLIGHT_ID CHAR(6) NOT NULL, SEGMENT_NUMBER INTEGER NOT NULL, ORIG_AIRPORT CHAR(3), DEPART_TIME TIME, DEST_AIRPORT CHAR(3), ARRIVE_TIME TIME, MEAL CHAR(1) CONSTRAINT MEAL_CONSTRAINT CHECK (meal IN ('B', 'L', 'D', 'S')), FLYING_TIME DOUBLE PRECISION, MILES INTEGER, AIRCRAFT VARCHAR(6), CONSTRAINT FLIGHTS_PK Primary Key (FLIGHT_ID, SEGMENT_NUMBER) ); CREATE TABLE APP.FLIGHTAVAILABILITY ( FLIGHT_ID CHAR(6) NOT NULL , SEGMENT_NUMBER INTEGER NOT NULL , FLIGHT_DATE DATE NOT NULL , ECONOMY_SEATS_TAKEN INTEGER DEFAULT 0, BUSINESS_SEATS_TAKEN INTEGER DEFAULT 0, FIRSTCLASS_SEATS_TAKEN INTEGER DEFAULT 0, CONSTRAINT FLIGHTAVAIL_PK Primary Key (FLIGHT_ID, SEGMENT_NUMBER, FLIGHT_DATE), CONSTRAINT FLIGHTS_FK2 Foreign Key (FLIGHT_ID, SEGMENT_NUMBER) REFERENCES FLIGHTS (FLIGHT_ID, SEGMENT_NUMBER) ); CREATE TABLE APP.FLIGHTHISTORY ( ID INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY, USERNAME VARCHAR(26) NOT NULL, FLIGHT_ID CHAR(6) NOT NULL, ORIG_AIRPORT CHAR(3) NOT NULL, DEST_AIRPORT CHAR(3) NOT NULL, BEGIN_DATE CHAR(12), CLASS CHAR(12) ); CREATE TABLE APP.USERS ( ID INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY, USERNAME VARCHAR(40) NOT NULL, PASSWORD VARCHAR(20) ); Now start the Derby network server by right-clicking the LowFareAir project and choosing Apache Derby > Start Derby Network Server. The console view should state the server is ready to accept connections on the port specified in the Derby properties Network Server settings. Open the sql folder and right-click the flights.sql file. Select Apache Derby > Run SQL Script using 'ij'. The console window will show the output of the three SQL statements contained in the flights.sql file. If a connection was not made, check to make sure the network server has been started, and that derby.system.home has been set to the full path to the data directory under the LowFareAir folder. The WTP data tools -- An alternative The WTP has a rich set of database tools that allow users to connect to, browse, and issue SQL against Derby, DB2®, Informix®, MySql, Oracle, SQL Server, and Sybase databases. In this section you'll connect to the Derby Network Server using the Derby Network Client driver, and learn how to use some of these tools as an alternative to using the Derby plug-ins. In the J2EE perspective, select Window > Show View > Other. In the Show View window, select Data > Database Explorer and then click OK. The Database Explorer view will appear in the bottom right part of the workspace. Right-click somewhere in this view, and select New Connection. The wizard that appears is the New Connection wizard. Uncheck the Use default naming convention checkbox, and name the connection Derby10.1. Under the Select a database manager section, expand the Derby item in the tree and select the 10.1 version of the database system. In release V10.1 of Derby, a new open source client driver -- derbyclient.jar -- is the recommended way to connect to the network server. The image below shows the values for each field in my environment. The table following also lists the sample settings. Figure 5. The New Connection wizard of the WTP Database Explorer Configuring authentication for Derby databases is available; however, this has not been set up for the airlinesDB database. Use any (non-null) value for user ID and password, and click the Test Connection button. If the network server is running on port 1527, the test should be successful. If it is not, make sure the network server is running on the port specified in the connection URL, and check to make sure all of the values are appropriate for your environment. Since the network server is used to connect to the airlinesDB database, multiple JVMs can access it. This means ij, the Database Explorer, and a client application external to Eclipse could all connect to and query the tables in the database. Once the test connection is successful, click the Next button. In the Specify Filter window, uncheck the Disable filter checkbox, select Selection and check the APP schema. Then click Finish. Figure 6. Specifying the filter for the Derby 10.1 connection Now the Database Explorer view shows the connection to the airlinesDB database. Expand the tree and browse to the FLIGHTS table under the Tables folder for the APP schema. Right-click the FLIGHTS table, and select Data > Sample Contents. Figure 7. Sample contents in the Database Explorer The Data Output view will now appear with the rows in the FLIGHTS table. Figure 8. The Data Output view Another feature of the Database Explorer view is the ability to insert, delete, and update rows in tables. The Table Editor provides the ability to modify data in a table. If you want to add another row to the FLIGHTS table, right-click it in the Database Explorer view, and select Data > Edit. Enter a new row, putting values in each column as shown below. Notice on the FLIGHTS tab the asterisk that precedes the word FLIGHTS. This indicates the editor has been changed since the last time it was saved. Figure 9. The Table Editor To insert the row into the table, save the editor by selecting File > Save or using the shortcut, Ctrl + S. The Data Output view's Messages tab shows the data was inserted successfully. Figure 10. Successfully inserting a row into the FLIGHTS table Other features of the Database Explorer view include the ability to Extract and Load tables, open an SQL Editor to issue ad-hoc SQL, and a Generate DDL option that is very useful to generate SQL scripts that you can use to create entire database schemas or a subset of the schema. You can explore all of these options on your own by right-clicking on the schema object (a table, view, index, or schema) to see which options are available for that particular object. Exploring the View Layer and JSPs Now you can start looking at the JSPs in the application. Referring to the flow of the application in Figure 2, the first JSP page is Welcome.jsp. From the Project Explorer, open up the Welcome.jsp file under the WebContent folder. The code for this page is shown in sections below. Note: Some code listings may have a backslash character, "\", to indicate the line of code is continuous but has been formatted for readability. Listing 3. Importing the Core and SQL taglibs in Welcome.jsp <%@taglib uri="" prefix="c"%> <%@taglib uri="" prefix="sql"%> The first two lines include the taglib directives available to the JSP page, so that you can use the core and SQL JSTL tag libraries. The next section makes use of the core tag library to test if any cookies are set in the user's browser. If there are none, the JSP page is forwarded to Register.jsp. If there is at least one cookie, then a check is made to see if the cookie with the name of derbyCookie is set. If the derbyCookie exists then the next test is to check if the Session object contains a user ID. If not, the user is forwarded to Login.jsp to log in to the application. If the user does have a user ID in the Session object, the user is allowed to proceed and processing continues on the Welcome.jsp page. Listing 4. Testing if cookies are set in Welcome.jsp <HTML> <HEAD> <TITLE>Derby Airlines</TITLE> </HEAD> <BODY> <c:choose> <c:when <jsp:forward </c:when> <c:otherwise> <!-- if the derbyCookie has been set but the username is not in \ the session object --> <c:forEach <c:if <c:if <jsp:forward </c:if> </c:if> </c:forEach> </c:otherwise> </c:choose> The code below checks to see if the parameter called nodirectflights is set to true when this page is posted. If you look at the code further down, the action for the form is to post to the CheckFlightsServlet. If the CheckFlightsServlet does not have any direct flights based on the origin and destination the user selects, the parameter nodirectflights is set to true, the user is returned to this page, and this snippet of code will display the message about no direct flights being available. Listing 5. Displaying a message if direct flights are unavailable in Welcome.jsp <c:if <p> There were no direct flights between <font color="blue" size="5"> ${cityNameFrom}</font> and <font color="blue" size="5"> ${cityNameTo}</font>. <br> Please select another flight. </p> </c:if> You can use the JSTL core out tag to display the parameters passed to JSPs in either the session or request objects. After a successful login, the user ID is placed in the Session object, and displayed on the Welcome.jsp page when the user accesses this page. Also, notice the use of the jsp:include tag to include the CalendarServlet which displays the current month, day, and year, as well as a drop down box to select the departure date. Listing 6. After a successful login <H3> Welcome to Derby Air, <c:out!</H3> <p> Please proceed to select a flight. </p> <form action="CheckFlightsServlet" method="post"> <table width="50%"> <tr> <td valign="top"> <b>Departure Date:</b><br> <jsp:include <jsp:param </jsp:include> </td> Here is the section that makes use of the JSTL SQL library. Before using the sql:query tag the way it is shown below, you have already set up the DataSource in the web.xml file for the application. That entry will be discussed later. For now, notice how easy this tag is to use. The query is issued against the CITIES table to return the values contained in the city_name, country, and airport columns of the APP.CITIES table. The results of the query are put in a variable called cities, which is a javax.servlet.jsp.jstl.sql.Result object. The Result object has a method called getRows() that returns all rows contained in it. The rows are returned as an array of java.util.SortedMap objects. Listing 7. Using the SQL taglib <sql:query SELECT CITY_NAME, COUNTRY, AIRPORT FROM APP.CITIES ORDER BY \ CITY_NAME, COUNTRY </sql:query> Iteration over the rows contained in the cities Result object is shown below using the core forEach tag. For each iteration through the array, the variable named city contains a SortedMap object. The Expression Language allows you to access the value of each column in each row by referring to the column name in the specific SortedMap object representing that row in the database. Listing 8. Outputting the result of the SQL query <td> <b>Origin:</b><br> <select name="from" size="4"> <c:forEach <option value="${city.airport}"> ${city.city_name}, ${city.country}\ </option> </c:forEach> </select> <br><br> </td> </tr> The rest of the page is not shown here. It outputs the Destination drop down box the same way the Origin destination is generated above, and then provides a button for the user to submit the query to check for the flights between the Origin and Destination. Examining the flow of control and running the application Before you can run LowFare Air, you need to start the Derby Network Server, if it is not already running. Right-click the LowFareAir folder, and select Apache Derby > Start Derby Network Server. Now right-click the Welcome.jsp file in the Project Explorer view. Select Run As > Run On Server. Figure 11. Running Welcome.jsp on the Tomcat server This brings up the Run on Server wizard. Proceed through the wizard as follows: - For the Server's host name, select localhost. For the server type, expand the Apache folder and select Tomcat V5.5. For the server runtime, select Apache Tomcat V5.5. By default Tomcat version 5.5 or higher requires Java 5 (1.5) or higher. However, you can use Java 1.4 by following the instructions in RUNNING.txt included in the Tomcat distribution. In this example, I have a 1.5 version of the JDK installed. Tomcat will fail to start if you are not using a 1.5 version of Java, or you have not configured it to use 1.4.Next, check the Set server as project default checkbox. Click the Next button. - In the Add and Remove Projects window, verify that the LowFareAir project is listed in the Configured Projects area. If it is not, and it appears in the Available Projects category, move it to the Configured Projects category. Click Finish. This will start the external Tomcat server and launch a browser window to run the JSP on. For Windows, the default is to launch an internal browser from within Eclipse. On Linux it will default to the external browser. To configure the launch of an external browser: - Select Window > Preferences. - Select the General tree item, then Web Browser. - Select the Use external Web browser button, then choose from the browsers available in the list. - Click OK to set the preference. It is always a good idea to check your Web pages with different browsers. They do behave differently when rendering some HTML elements, as well as in their default handling of cookies. As described above, the first time you launch Welcome.jsp, it will redirect you to the Register.jsp page. The derbyCookie has not been set, so Welcome.jsp redirects you to Register.jsp to create a user ID and password. See Figure 12. Figure 12. Register.jsp with new user ID entry When you enter a user ID and password and click the Register New User button, the values are passed to the LoginServlet class. Open up this Java class now, located under the LowFareAir > Java Resources > JavaSource > com.ibm.sample folder and package structure. The doPost method first parses the incoming parameters, including the user ID and password you just set in Register.jsp. Listing 9. The doPost method of the LoginServlet class protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { String user = request.getParameter("username"); String password = request.getParameter("password"); // Register.jsp set the parameter newuser String newUser = request.getParameter("newuser"); String loggedOut = request.getParameter("loggedOut"); Then it connects to the Derby database by calling the getConnInstance() method on the DerbyDatabase class. The getConnInstance method returns a singleton java.sql.Connection to the database. Listing 10. Setting the conn variable to the singleton Connection object of the DerbyDatabase class Connection conn = DerbyDatabase.getConnInstance(); The next section of code determines if the user is new or not, and either selects the user's ID from the database if they already exist, or adds it to the database if they do not. Listing 11. Processing new users in the LoginServlet class // the user is not new, so look up the username and password // in the APP.USERS table if (newUser == null || newUser.equals("")) { sql = "select * from APP.USERS where username = '" + user + "' and password = '" + password + "'"; String[] loginResults = DerbyDatabase.runQuery(conn, sql); // if the query was successful one row should be returned if (loginResults.length == 1) { validUser = true; } } // the user is new, insert the username and password into // the APP.USERS table else { sql = "insert into APP.USERS (username, password) values " + "('" + user + "', '" + password + "')"; int numRows = DerbyDatabase.executeUpdate(conn, sql); if (numRows == 1) { validUser = true; } } To verify that the user ID Susan was added to the APP.USERS table, bring up ij, the SQL tool from the Derby plug-in, to query the APP.USERS table. To bring up ij, right-click the LowFareAir folder, then select Apache Derby > ij (Interactive SQL). Connect to the airlinesDB database by issuing this connect statement from ij and running the query select * from APP.USERS; to see if the user ID you entered into your browser appears. In this example, the user ID is Susan. Note: Version 3.1 of Eclipse has different behaviour than previous versions in regards to where the cursor is in the console view. The cursor is always positioned at the beginning of the line. Although it looks a little odd, if you start typing at the ij prompt, the cursor will reposition to the correct location, right after the 'j' in ij. Listing 12. Connecting to the airlinesDB database connect 'jdbc:derby://localhost:1527/airlinesDB'; select * from APP.USERS; The output from ij is shown below. Figure 13. ij results from APP.USERS table Register.jsp passed those values correctly and LoginServlet.java inserted the user ID and password correctly into the table. The user ID and password of 'slc' and 'slc' were already in the table. The rest of the code in LoginServlet.java deals with incorrect user IDs and passwords, and then forwards the results to Welcome.jsp if everything is correct. The next image shows Welcome.jsp with an Origin of Albuquerque and a Destination of Los Angeles. Note that the user ID was passed from LoginServlet.java to Welcome.jsp for display. Figure 14. Welcome.jsp after successful login Welcome.jsp posts to CheckFlightsServlet.java. Open up CheckFlightsServlet.java. The notable thing about this servlet is that after parsing the incoming parameters, it calls the DerbyDatabase method origDestFlightList(). This method returns an array of FlightsBean objects. Listing 13. The CheckFlightsServlet class Connection conn = DerbyDatabase.getConnInstance(); FlightsBean[] fromToFlights = \ DerbyDatabase.origDestFlightList(conn, from, to); Once the FlightsBean array has been populated, CheckFlightsServlet places the results in the session with the variable name of fromToFlights. Listing 14. Placing the array of FlightsBean into the session object request.getSession().setAttribute("fromToFlights", fromToFlights); Now open the DerbyDatabase.java class to see what this method does. Some of the code has been slightly reformatted for ease of viewing. Listing 15. Examining the origDestFlightList method in the DerbyDatabase class public static FlightsBean[] origDestFlightList(Connection conn, \ String origAirport, String destAirport) { String query = "select flight_id, segment_number, orig_airport, " + "depart_time, dest_airport, arrive_time, meal, flying_time, miles," + "aircraft from app.flights where ORIG_AIRPORT = ? AND " + "DEST_AIRPORT = ?"; List list = Collections.synchronizedList(new ArrayList(10)); try { PreparedStatement prepStmt = conn.prepareStatement(query); prepStmt.setString(1, origAirport); prepStmt.setString(2, destAirport); ResultSet results = prepStmt.executeQuery(); while(results.next()) { String flightId = results.getString(1); String segmentNumber = results.getString(2); String startAirport = results.getString(3); String departTime = results.getString(4); String endAirport = results.getString(5); String arriveTime = results.getString(6); String meal = results.getString(7); String flyingTime = String.valueOf(results.getDouble(8)); String miles = String.valueOf(results.getInt(9)); String aircraft = results.getString(10); list.add(new FlightsBean(flightId, segmentNumber, startAirport, departTime, endAirport, arriveTime, meal, flyingTime, miles, aircraft)); } results.close(); prepStmt.close(); } catch (SQLException sqlExcept) { sqlExcept.printStackTrace(); } return (FlightsBean[])list.toArray(new FlightsBean[list.size()]); } The origDestFlightList method issues an SQL query using a PreparedStatement, and places the results in a FlightsBean[] array. One of the FlightsBean constructors is shown below. Listing 16. One of the FlightsBean constructors public FlightsBean(String flight_id, String segNumber, String origAirport, String depart_time, String destAirport, String arrive_time, String food, String flying_time, String mile, String jet) { flightId = flight_id; segmentNumber = segNumber; startAirport = origAirport; departTime = depart_time; endAirport = destAirport; arriveTime = arrive_time; meal = food; flyingTime = flying_time; miles = mile; aircraft = jet; } Once the origDestFlightList method populates the FlightsBean array, processing continues in the CheckFlightsServlet servlet and the results are forwarded to the GetFlights.jsp page. In the next section, you'll use the JSP debugger feature of the WTP to look at the values returned in the FlightsBean when selecting a flight. The next section will start the Tomcat server in debug mode, so go ahead and stop it now. To do this, select the Servers view tab in the lower right section of the workspace. Then right-click in the Tomcat server line and select Stop. Figure 15. Stopping the Tomcat server from the WTP Servers view Using the JSP debugger Stepping back from the code a minute, here is a summary of what you have seen by exploring and configuring the sample application: - WTP - Used the Database Explorer view to configure a connection to a Derby 10.1 database using the Derby Client driver. - Sampled the contents of the FLIGHTS table from the Database Explorer view. - Added a row to the FLIGHTS table from the Database Explorer view, using the Data > Open menu option. - Started the Tomcat server using the Run As > Run On Server option from a JSP file. - Opened and examined a JSP in the JSP editor. - Stopped the Tomcat server using the Servers view. - Derby plug-in - Added the Apache Derby nature to a Dynamic Web Project. - Configured Derby system properties using the Project Properties menu. - Ran entire SQL scripts via ij. - Issued SQL commands via ij. - Started and stopped the Derby Network Server. Now let's look at the JSP debugging capabilities of the WTP. At this point there is at least one valid user in the Derby airlinesDB APP.USERS table, and you may have added another. Before you run through the application again, set a breakpoint in the GetFlights.jsp page first, and then start the Tomcat server in debug mode. To set the breakpoint, open GetFlights.jsp, and right-click in the grey area to the left of the line of code which starts with <c:set var="myradiobutton". Select Toggle Breakpoints as shown below. Figure 16. Setting a breakpoint in GetFlights.jsp The breakpoint will appear as a blue dot in the grey area on the left. Now from the Project Explorer (making sure the Derby Network Server is still running), right-click Welcome.jsp and select Debug As > Debug On Server. The Tomcat server will now start up in Debug mode and bring up the Welcome.jsp page prompting for a user ID and password. Enter the user ID and password you entered previously, or a user ID value of slc with a password of slc. If you did not delete the cookie already set, you will be allowed to select a flight from Welcome.jsp without having to log in again. Not all of the flights listed in the Origin to Destination are direct. Select an Origin of Albuquerque and a Destination of Los Angeles, since that flight is direct. Then finish by clicking Submit Query. At this point, Eclipse should prompt you to switch to the Debug perspective. Confirm the perspective switch. When the debug perspective appears, the Variables view on the upper right of the workspace will appear. In the bottom left, the GetFlights.jsp editor view will also appear, showing where you set the breakpoint. The Variables view may not be populated with values immediately. You might need to go to the Debug view and select the thread that has been suspended. Once you expand the suspended thread and select GetFlights.jsp in the Debug view as shown below, the Variables view should be populated. In the Variables view, look for your core JSTL when tag. Expand the tree for the _jspx_th_c_when_0=WhenTag item. See Figure 17. Figure 17. Examining variables from the JSP debug perspective Notice the reference to the parent tag ChooseTag from within the WhenTag. Expand the parent ChooseTag, and contained within it is the parent ForEachTag. Expand that ForEachTag and finally expand the item variable which equals FlightsBean. If you have been successful in following this, your Variables view will look something like what is shown below. Figure 18. FlightsBean values in Debug mode This shows the values set in the FlightsBean object, which were selected with a departing airport of Albuquerque and a destination airport of Los Angeles. The JSP debugger can be extremely useful when troubleshooting Web applications, and in particular in examining variables as shown here. Now click the Resume button on the top left menu item, which looks like a green arrow, to step past the breakpoint. The browser will display the output of the GetFlights.jsp page. In the browser, select the available flight (flight number AA1111), and click the Book Flight button. The next page gives you one last chance to either book the flight or check for other flights. Go ahead and click the Book Flight button. The next page will appear similar to what is shown below. Figure 19. Flight History When you clicked Book Flight on the previous screen, a row was inserted into the APP.FLIGHTHISTORY table. Since you have learned how to use both ij and the Database Explorer, you can verify that the row was actually inserted into the table. At this point in the application, the user can either return to select a new flight, or log out of the application. Summary In setting up and configuring the WTP platform, you have used the external Tomcat server, debugged a JSP file, configured a connection to a 10.1 Derby database using the Database Explorer, and used the Database Explorer to browse a table and insert a row. You've learned about the use of the Derby plug-ins, including adding the Apache Derby nature, starting and stopping the Network Server, using ij to run SQL scripts and issue ad-hoc commands, as well as setting the derby.system.home property. You configured the web.xml file for the Web application to use Derby as a data source. Finally, you used the JSTL SQL tag library to issue queries against the Derby database you configured as a data source. I hope this article has provided a solid foundation for using the numerous tools available within WTP and the Derby plug-ins. Starting from these fundamentals, you can develop robust Web applications using Derby as the data store. Downloads Resources Learn - Derby: The Apache Derby site contains online documentation, downloads (source and binary), and integration information with other open source projects. Both a developers and users mailing list is available to subscribe to or browse as archives. - Derby Plug-ins: The Apache Derby Integration area contains a section, Eclipse Plug-ins, which has a brief presentation on the Derby plug-ins and a Lab on how to use them available for download. - JSTL: An excellent series of articles on developerWorks, A JSTL primer, is a great place to start learning about the JSTL. - WTP: The WTP Web site is host to all things WTP. In particular, the tutorials are helpful and subscribing to the newsgroup can help when problems are encountered. - The Cloudscape information center contains all of the Cloudscape documentation on-line. - developerWorks provides numerous technical articles related to Cloudscape. Get products and technologies - Eclipse WTP project: Download Eclipse 3.1.1 and WTP 1.0 as a single bundle. - Apache Derby Core and UI plug-ins: Download the Derby Core 10.1.2 and UI 1.1.0 plug-ins as zip files from the Distribution section. - Apache Tomcat: Download Apache Tomcat, version 5.5.15. - Apache Jakarta Project: Download the Apache Jakarta Standard 1.1 Taglibs..
http://www.ibm.com/developerworks/data/library/techarticle/dm-0509cline/index.html
CC-MAIN-2014-49
en
refinedweb
01 June 2010 07:54 [Source: ICIS news] SINGAPORE (ICIS news)--DSM Engineering Plastics has completed its takeover of Japan's Mitsubishi Chemical Corporation's (MCC) polyamide business, the Dutch producer said in a statement on Tuesday. The acquisition was part of an agreement, which enabled DSM to acquire MCC’s polyamide business in exchange for its polycarbonate business, the statement added. “Both businesses have an annual net sales level of approximately €90m ($111m) each. The parties have agreed not to disclose other financial details,” DSM said. The deal would also enable DSM to “expand its position in ?xml:namespace> A number of key employees of DSM's polycarbonate business were transferred to MCC as part of the deal, it added. ($1 = €0.81) For more on polyamide, polycarbon
http://www.icis.com/Articles/2010/06/01/9363550/dsm-completes-takeover-of-mitsubishis-polyamide-business.html
CC-MAIN-2014-49
en
refinedweb
29 August 2012 08:34 [Source: ICIS news] SINGAPORE (ICIS)--?xml:namespace> The new ASU to be built at the Phu My industrial park in Ba Ria will be the largest in The unit will be able to produce 35,000 normal cubic metres of air gases per hour and is scheduled to go on stream in 2014, Linde said. “It will supply the new steelworks that PSSV is currently building in Phu My with gaseous oxygen, nitrogen and argon, and also manufacture products for the regional market in southern (
http://www.icis.com/Articles/2012/08/29/9590636/germanys-linde-to-invest-40m-in-vietnam-air-separation-unit.html
CC-MAIN-2014-49
en
refinedweb
To change it all to boolean takes to long. there is only one error left. an out of bounderies error, could you see it? code: import java.util.*; import java.io.*; class GameOfLife{ // main... Type: Posts; User: ProgrammerNoobE To change it all to boolean takes to long. there is only one error left. an out of bounderies error, could you see it? code: import java.util.*; import java.io.*; class GameOfLife{ // main... When I execute my program it gives an error : java.lang.ArrayIndexOutOfBoundsException: 6 at Field.numNeighbours(GameOfLife.java:131) at Field.generationNextCalc(GameOfLife.java:164) at... Thank you again! I think I'm almost there, just have my doubts about the output My recent code: import java.util.*; import java.io.*; My new code: import java.util.*; import java.io.*; class GameOfLife{ // main class Field field = new Field(); It reads the file Then calculates the number of rows(lines) I added 2, because I want to make a border around it I tried to write a program such as the game of life, but without the sourcecode of the game of life. Just with the things I learned so far. It is not quit clean, but when it works it is fine to... Stupid of me! scanner = new Scanner (System.in); Thank you! Thank you for your fast reply! I tried to locate it, but I just don't see it. Only in this class the variabel numberOfGenerations is used and it is initialized just before the user is asked for... Hello, I need some help, because I just don't see it! This is my code: class GameOfLife{ // main class Field field = new Field(); Thank you for your clear explanation! But I have still 1 question, how do I add a new account with a name which is entered by the user and an initial balance of 0? I want to do that in the enroll... I don't understand you. I'm a beginner in Java, so I'm not familiair with everything. How do I define the accounts.equals in my Account class? And do you know why my accound.add(account); doesn't... Hello, The comments in the code are my specified questions about the code. Please help me with finding the proper solution. I searched on the web, but I did not find the right solution for my... I understand what you say, but I just don't know how it works, could you give me a hint? what I have right now: void enroll(String name){ if(account.equals(account)){ ... I don't know what you mean. Could you explain it with an example? I'm not so familiair with english terms, because I'm from Holland. [/COLOR]Your ArrayList holds accounts. Why are you passing a String into the contains() method?[/QUOTE] I've put name in it, because when the user types a command like enroll "name" it should only... My program looks now like this: import java.util.Scanner; import java.util.ArrayList; class Bank{ double balance; //Account account; Thank you PhHein, Am I in the right direction now? import java.util.Scanner; import java.util.ArrayList; class Bank{ Pleas help me! I have a problem with my ArrayList, I don't understand how the user input should be stored in the ArrayList and how I can print the ArrayList my code: import... Thank you very much, I searched in the coursereader for information about arraylists, but what I don't understand is how I have to print the accounts? import java.util.Scanner; import... I also tried it with this code: import java.util.Scanner; import java.util.ArrayList; class Bank{ double balance; //Account account; Thank you very much, it is working better now! Now I'm stuck on the array part. I don't really understand arrays and now I have to make a print of multiple accounts of people. How do I have to do... Hello PhHein, Thank you for your fast replies! Could you explain it a little bit more, I'm a dutch student and I'm not really good with English terms so maybe you could explain it with some... sorry for this mistake, I've changed it now This is the code I have so far, but it does not work :( I think I choose the wrong constructor, because in the assignment stands that the class Bank should be the main class, so that should be the... 4 Banking You are going to build a simple system of bank accounts. The program reads commands from input and executes them on the accounts (such as withdrawals, deposits, enrolling of new...
http://www.javaprogrammingforums.com/search.php?s=3d7f579a8236a9e9e925c19aa830bdeb&searchid=1203227
CC-MAIN-2014-49
en
refinedweb
Base types, Collections, Diagnostics, IO, RegEx… I recently gave a talk on CodeDom for compiler writers, and I thought I would adapt some of those slides here as a basic intro to CodeDom. CodeDom, or Code Document Object Model, is a feature which lets you generate code and assemblies in a language neutral way. It's also extensible, meaning it's possible to plug in a new provider and in theory get full support for that language. CodeDom works with two parts. First is the codedom tree, which is the language neutral part representing the code you want to generate. The tree describes all parts of your code, from namespaces, types and methods down to individual statements. There's some sample code farther down. You'll see that building a tree for your code takes a lot of code, but it's relatively straightforward. And for most people, creating CodeDom trees is the most interesting part about CodeDom. They are simply in the business of generating code or assemblies and don't care about anything language specific. The second part is providers which are language specific and know how to turn a tree into code and compile it. Compiler writers will be most interested in writing their own provider. Generally, though, if you want to generate code you don't need to be too concerned about providers. When should you use CodeDom? If you're generating code on behalf of the user, you should use CodeDom so you aren't limited to a particular language. This pertains to any code you might generate - as long as the user will see the code, there's a good chance they will want to control which language it's in. Also if you're creating dynamic assemblies at runtime, you might want to use CodeDom as an alternative to Reflection.Emit or directly writing out code and starting a separate process to run the compiler. Since the CodeDom model is closer to how one would write actual code, most people will find it easier than something as low level as Reflection.Emit. CodeDom is also a lot more maintainable than doing everything manually. using System;using System.CodeDom;using System.CodeDom.Compiler;using System.Reflection; public class Demo { public static void Main () { CodeCompileUnit ccu = new CodeCompileUnit(); //namespace Foo { CodeNamespace namesp = new CodeNamespace("Foo"); // using System; namesp.Imports.Add(new CodeNamespaceImport("System")); ccu.Namespaces.Add(namesp); // public class Bar { CodeTypeDeclaration barType = new CodeTypeDeclaration("Bar"); barType.TypeAttributes = TypeAttributes.Public; namesp.Types.Add(barType); // public void MyMethod() { CodeMemberMethod myMethod = new CodeMemberMethod(); myMethod.Name = "MyMethod"; myMethod.Attributes = MemberAttributes.Public; barType.Members.Add(myMethod); // Console.WriteLine("Hello World"); CodeMethodReferenceExpression methodRef = new CodeMethodReferenceExpression(); methodRef.MethodName = "WriteLine"; methodRef.TargetObject = new CodeTypeReferenceExpression(typeof(Console)); CodePrimitiveExpression methodArg = new CodePrimitiveExpression("Hello World"); CodeMethodInvokeExpression methodInvoke = new CodeMethodInvokeExpression(methodRef, new CodeExpression[] {methodArg}); myMethod.Statements.Add(new CodeExpressionStatement(methodInvoke)); // Do some extra checks to help mitigate code-injection security holes. CodeGenerator.ValidateIdentifiers(ccu); // change this line to generate code in C#, VB, jscript CodeDomProvider provider = CodeDomProvider.CreateProvider("VB"); provider.GenerateCodeFromCompileUnit(ccu, Console.Out, new CodeGeneratorOptions()); }} Also you might be interested to know about some of the parts of the Framework that use CodeDom today:
http://blogs.msdn.com/b/bclteam/archive/2005/03/15/396348.aspx
CC-MAIN-2014-49
en
refinedweb
You can subscribe to this list here. Showing 3 results of 3> ... Rien, The recent v1.1.42 release may interest you. We recently discussed various features that all amounted to some kind of server-level configuration options. Well, Spyce now has support for an optional server-level configuration file. In this file, you can specify the spyce path, modules to import at startup, a server-level error handler, and global server variables. See:. Enjoy, Rimon. -- * Rimon Barr Ph.D. candidate, Computer Science, Cornell University | barr@... - - Y!IM: batripler | | Understanding is a kind of ecstasy. +---- -- Carl Sagan Hi Rien, >with the kind of basic simple somewhat useless functionnalities, attached >is a spyce module which implements a counter for spyce scripts. It's simple, but it's certainly not useless. Thanks for your contribution! I altered it slightly as follows. I'll explain the changes, and my reasoning, below. (Your original code, is pasted at the very end of the message for reference.) #Counter module, v1.1 #Courtesy of 'rien' <rien@...> #Modifications by Rimon Barr <barr+spyce@...> from spyceModule import spyceModule import shelve __doc__ = '''Counter module allows to count the number of request made to a spyce script''' class counter(spyceModule): def start(self): count = shelve.open(os.path.join(os.path.dirname(self.wrapper.getFilename()), 'counter.db')) path = self.getModule('request').uri_path() try: self.count = count[path] = count[path] + 1 except: self.count = count[path] = 1 count.close() To use it, you would simply write: [[.module name=counter]] This page has been visited [[=counter.count]] times. -!) - It's easier to get the script filename from the spyce wrapper, ie. self.wrapper.getFilename(). I didn't expect you to know this, though, because none of the internal Spyce engine objects (including wrapper) are documented. - There is no reason to call the request module's uri() method numerous times. It's just expensive, and for no reason. I store the result in a local variable. - Python 1.5 does not support the += operator. - You can use exception handling (as above) to save some disk access. i.e. You don't need to lookup the existence of a key using has_key() in the disk-based hashtable. Just try and deal with the exceptions. - You don't need a get() method, when a field is sufficient. I changed the 'get()' method to a field called 'count'. This is more a matter of taste. >please Rimon, read it and check it to see if something is wrong with it. I hope that helps; sorry for the delay. Nice work for the first person aside from me to write a Spyce module! I know that the documentation in this regard is sparse, and I was glad to see that you no trouble writing one. Take care, Rimon. -- * Rimon Barr Ph.D. candidate, Computer Science, Cornell University | barr@... - - Y!IM: batripler | | Understanding is a kind of ecstasy. +---- -- Carl Sagan #Counter module, v1.0 #courtesy of 'rien' <rien@...> from spyceModule import spyceModule import shelve __doc__ = """Counter module allows to count the number of request made to a spyce script""" class counter(spyceModule): def start(self): self.request_module = self.getModule('request') self._count = shelve.open(self.request_module.env('SCRIPT_FILENAME') + '.count') if self._count.has_key(self.request_module.uri('path')): self._count[self.request_module.uri('path')] += 1 else: self._count[self.request_module.uri('path')] = 1 def finish(self, error): self._count.close() def get(self): return self._count[self.request_module.uri('path')]
http://sourceforge.net/p/spyce/mailman/spyce-users/?viewmonth=200209&viewday=11
CC-MAIN-2014-49
en
refinedweb
Hello, I've encountered a strange behavior in __declspec(dllexport) in my project. I have a C++ project that uses classes, namespaces, try-catches and more cpp elements. When exporting any dummy function in this DLL, no other C project will be able to load it with LoadLibrary (Getting error 'module not found'). Is it possible to load dynamically C++ dlls through C projects? These projects are Windows Mobile projects, but they should behave the same as on regular PC win32. I'm stuck on it and any help will be appreciated. Thank you, Emil.
http://cboard.cprogramming.com/cplusplus-programming/126318-dll-export-cplusplus.html
CC-MAIN-2014-49
en
refinedweb
More topics to be added soon A Class looks like public class Person{ String name; int age; public void displayMsg(){ System.out.println("Im " + name ); System.out.println("Im " + age + " years old"); } } public class UsePerson{ public static void main(String[] args) { // Create an object of type Person and referred by a variable personOne. Person personOne= new Person(); personOne.name = "Dara"; personOne.age = 20; personOne.displayMsg(); } } We have two classes in a same source file. Then, which name should we use as a file name ? You have to use the name of the class containing main() method. so we save the above source file as UsePerson.java. A Java application can have any number of classes but any one class should contain a main() method. During runtime, the JVM will search for the main() method and start executing the statements in main() method. Then what about public, static and void. That we will discuss in the remaining chapters. Using Person class, you can create any number of objects using new() operator.The syntax is To create an object for Person class, we use new Person() will create a object of type Person. personOne is a variable name to refer object of type Person Prev Next
http://fresh2refresh.com/java-tutorial/java-class-and-object/
CC-MAIN-2014-49
en
refinedweb
{- | This module provides the 'BBox1' type (mainly for completeness). -} module Data.BoundingBox.B1 where import Data.Vector.Class import Data.Vector.V1 import qualified Data.BoundingBox.Range as R -- | The 'BBox1' type is basically a 'Range', but all the operations over it work with 'Vector1' (which is really 'Scalar'). While it's called a bounding /box/, a 1-dimensional box is in truth a simple line interval, just like 'Range'. newtype BBox1 = BBox1 {range :: R.Range} deriving (Eq, Show) -- | Given two vectors, construct a bounding box (swapping the endpoints if necessary). bound_corners :: Vector1 -> Vector1 -> BBox1 bound_corners (Vector1 xa) (Vector1 xb) = BBox1 $ R.bound_corners xa xb -- | Find the bounds of a list of points. (Throws an exception if the list is empty.) bound_points :: [Vector1] -> BBox1 bound_points = BBox1 . R.bound_points . map v1x -- | Test whether a 'Vector1' lies within a 'BBox1'. within_bounds :: Vector1 -> BBox1 -> Bool within_bounds (Vector1 x) (BBox1 r) = x `R.within_bounds` r -- | Return the minimum endpoint for a 'BBox1'. min_point :: BBox1 -> Vector1 min_point = Vector1 . R.min_point . range -- | Return the maximum endpoint for a 'BBox1'. max_point :: BBox1 -> Vector1 max_point = Vector1 . R.max_point . range -- | Take the union of two 'BBox1' values. The result is a new 'BBox1' that contains all the points the original boxes contained, plus any extra space between them. union :: BBox1 -> BBox1 -> BBox1 union (BBox1 r0) (BBox1 r1) = BBox1 (r0 `R.union` r1) -- | Take the intersection of two 'BBox1' values. If the boxes do not overlap, return 'Nothing'. Otherwise return a 'BBox1' containing only the points common to both argument boxes. isect :: BBox1 -> BBox1 -> Maybe BBox1 isect (BBox1 r0) (BBox1 r1) = do r <- (r0 `R.isect` r1) return (BBox1 r) -- | Efficiently compute the union of a list of bounding boxes. unions :: [BBox1] -> BBox1 unions = BBox1 . R.unions . map range
http://hackage.haskell.org/package/AC-Vector-2.3.2/docs/src/Data-BoundingBox-B1.html
CC-MAIN-2014-49
en
refinedweb
iMessageQuestRewardFactory Struct ReferenceThis interface is implemented by the reward that sends a message to some entity (the behaviour will get this message). More... #include <tools/questmanager.h> Inheritance diagram for iMessageQuestRewardFactory: Detailed DescriptionThis. - 1.2 by doxygen 1.4.7
http://crystalspace3d.org/cel/docs/online/api-1.2/structiMessageQuestRewardFactory.html
CC-MAIN-2014-49
en
refinedweb
Type: Posts; User: wklove2003 Here is where the declaration is. So 16 characters. #ifdef __APPLE_CC__ char mDevice[64]; #else char mDevice[16]; #endif Serial.hpp #ifndef SERIAL_HPP #define SERIAL_HPP class Serial { public: class SerialError { Most of this was written by someone else, so I am not sure why they did not use CString. Here is the string_buffer.cpp #include "internal.hpp" #include "string_buffer.hpp" ... bool Serial::connect() { if (mConnected) { printf("You are already connected, please disconnect first\n"); return false; } COMMTIMEOUTS cto; DCB dcb; On a seperate note, I do appreciate you guys for trying to help and I apologize. "The value of the error code is 2" That is saying that it cannot find the specified file correct? So I know that should be self explanatory, but can you explain to me what file it is talking... Sorry, I just havent been able to get a GetLastError() function to return anything. That is why I started trying other things. Seems to only cause problems when I try putting something in there. ... Ok, so when I try to step through in debug and putting @err,h in Watch window I get: Watch: @err,h CXX0026: Error: bad format string CallStack: > haas.exe!main(int aArgc, char * * aArgv) ... I am not getting any message, they are running the same way. No errors pop up in the call stack for either program. Nothing happens when I inserted the GetLastError function How do I call the GetLastError()? Sorry, this is all very new to me Is there a problem with this file? serial.win32.cpp-external dependency file bool Serial::connect() { if (mConnected) Okay, So I am new to programming and am trying to transfer data from a machine via RS232 to an "adapter" program that allows connection to a "client or agent" program that sends the data to the... Thank you, I think that solved the problem. Can't believe I didn't see that Hey Guys, So I am trying to implement a socket program that will allow me to transfer data from a machine and monitor it remotely. Having issues with a particular part of this program. I am a newb...
http://forums.codeguru.com/search.php?s=b0bf6d8abe9a0bbc024b91127a848ac3&searchid=5599609
CC-MAIN-2014-49
en
refinedweb
On collision you need to exit the loop. It only works on the last one because it will always run through the entire tree and always overwrite your 'collided' variable with the last "2" that was found. I would give you some advice on how to restructure this but I don't know how you are trying to use it. The simplest fix would be instead of collided = true; to simply do "return true;" remove the else and at the end of the loop do return false; But I would say you need to rewrite this completely since you can obviously index into your array. (you're already doing it) and only test against the tile that you need to know if it had a collision on, instead of running a loop testing all possible points. This is how i am using it for the moment. I have a MapLoader class that sends the Collision map, which is a vector of vectors to the Player class. In the Players Update that is where the for statement is. I am just using the collided variable for testing right now. In main.cpp I have a text function displaying the collided varible and its value.So when ever I move over the tiles that are twos it will show 1 and if not 0. Here is Player.cpp #include "Player.h" Player::Player() {} void Player::Destroy() { GameObject::Destroy(); } void Player::Init(MapLoader *colMap,ALLEGRO_BITMAP *image = NULL) { GameObject::Init(672, 554, 6, 6, 0, 0, 19, 15 ); SetID(PLAYER); SetAlive(true); collided = false; map = colMap->GetColMap(); lives = 3; score = 0; maxFrame = 4; curFrame = 0; frameDelay = 3; frameWidth = 36; frameHeight = 72; animationColumns = 4; animationDirection = 1; animationRow = 2; if(image != NULL) Player::image = image; } void Player::Update() { GameObject::Update(); if(x - boundX < 0) x = boundX; else if(x + boundX > WIDTH) x = WIDTH - boundX; if(y < 0) y = 0; else if(y > HEIGHT) y = HEIGHT; //Collision for(int i = 0; i < map.size(); i++) { for(int j = 0; j < map[i].size(); j++) { if(map[i][j] == 2) { if(x + 24 > j * 24 && x < (j * 24) + 24 && y + 24 > i * 24 && y < (i * 24) + 24 ) { collided = true; } else collided = false; } } } if(++frameCount >= frameDelay) { if(curFrame > 4) curFrame = 0; fx = (curFrame % animationColumns) * frameWidth; fy = animationRow * frameHeight; frameCount = 0; } } void Player::Render() { GameObject::Render(); //al_draw_rectangle(x + boundX,y + boundY,x - boundX,y - boundY,al_map_rgb(0,255,0),5); al_draw_bitmap_region(image, fx, fy, frameWidth, frameHeight, x - frameWidth / 2, y - frameHeight / 2, 0); } void Player::MoveUp() { animationRow = 3; curFrame++; dirY = -1; } void Player::MoveDown() { animationRow = 0; curFrame++; dirY = 1; } void Player::MoveLeft() { animationRow = 1; curFrame++; dirX = -1; } void Player::MoveRight() { animationRow = 2; curFrame++; dirX = 1; } void Player::ResetAnimation(int position) { if(position == 1) { dirY = 0; } else { dirX = 0; } } void Player::Collided(int objectID) { if(objectID == ENEMY) al_draw_rectangle(x + boundX,y + boundY,x - boundX,y - boundY,al_map_rgb(0,255,0),5); //lives++; }
http://www.gamedev.net/index.php?app=forums&module=extras&section=postHistory&pid=5018781
CC-MAIN-2014-49
en
refinedweb
11 December 2012 23:30 [Source: ICIS news] (updates throughout) HOUSTON (ICIS)--Dow Chemical’s US Gulf coast investment plans, including a new cracker in ?xml:namespace> “The ethane flexibility actions, along with actions to restart a cracker in St Charles, Louisiana, together boost [Dow's] global ethylene capacity by about 20%,” a company spokesperson told ICIS. Dow Chemical’s total propylene capacity will increase by about 900,000 tonnes/year and its total ethylene capacity by about 2.1m tonnes/year, the spokesperson said. Dow said previously that its planned cracker at Freeport, Texas, will have an ethylene capacity of 1.5m tonnes/year with start-up expected for 2017. Earlier on Tuesday, officials disclosed that Dow applied to the US Environmental Protection Agency (EPA) for an air-quality permit to build the
http://www.icis.com/Articles/2012/12/11/9623321/us-gulf-investments-to-raise-dows-ethylene-capacity-by.html
CC-MAIN-2014-49
en
refinedweb
Annotation Transformers in TestNG: The Sweet Spot for Annotations? In the continuing search to find the balance between XML and annotations, TestNG has introduced the concept of annotation transformers. Conceived of by TestNG co-founder Alexandru Popescu (who is also InfoQ's Chief Architect), an annotation transformer is code that will override the behavior of annotations elsewhere in your project. This allows you to modify your annotation without using XML and without recompiling your source. You will have to recompile your annotation transformers if you change them. Cedric Beust details the idea of annotation transformers and compares the pros and cons of XML vs annotations. His summary of XML vs annotations is: ...annotations allow you to put your configuration system close to the Java source it applies to, but changing it requires Java knowledge and a recompilation. On the other hand, XML files are usually easy to modify and they can be reread at runtime (sometimes even without relaunching your application), but they are very verbose and the edition can be error prone. Beust refers to an idea he had back in 2004 of using XML to override annotations as something that is unlikely to take off, for good reason. Instead, TestNG 5.3 includes annotations transformers, which allow developers to programmatically override annotations, via the IAnnotationTransformer interface. An example of their use is to override the number of times a test is invoked. For example: public class Mytest { @Test(invocationCount = 10) public void verify() { // ... } } This test annotation could be transformed to change the invocation count to a higher number: public class MyTransformer implements IAnnotationTransformer { public void transform(ITest annotation, Class testClass, Constructor testConstructor, Method testMethod) { if ("verify".equals(testMethod.getName())) { annotation.setInvocationCount(15); } } } Other example by Alex Popescu However, I am very interested to see more usage scenarios from our users. It took me and Cedric a while to figure out the details of this feature, so we are looking forward for possible ways to improve it. ./alex -- .w( the_mindstorm )p. TestNG co-founder EclipseTestNG Creator Re: Other example by Cedric Beust On an unrelated note, it's kind of scary to be pointed back to a blog entry I posted more than two years ago and that I didn't even remember writing. I stand by what I said back then, though :-) -- Cedric credits by Floyd Marinescu
http://www.infoq.com/news/2006/10/annotation-transformers
CC-MAIN-2014-49
en
refinedweb
Monitoring. If the Windows Firewall is in use, modify it to allow Remote Administration access. This will open the MS-RPC port and others as needed. The following command entered in a Command Prompt can be used: netsh firewall set service RemoteAdmin enable) option and then disable it. Create a new local account on the Windows device for monitoring. We assume in the remainder of these steps that this account was named zenossmonbut any valid account name can be used. Place the account only in the Users group and not in the Power Users or Administrators groups. Optionally, create a new user group for monitoring and use that group instead of the account in the remaining steps. Give the zenossmonaccount DCOM access by running the dcomcnfg utility. In the Component Services dialog box, expand Component Services, expand Computers, and then right-click My Computer and click Properties . In the My Computer Properties dialog box, click the COM Security tab. Under Access Permissions, click Edit Limits. In the Access Permission dialog box, add the zenossmonaccount to the list and ensure that the Remote Access checkbox is enabled, then click to close the dialog. Under Launch and Activation Permissions, click Edit Limits. In the Access Permission dialog box, add the zenossmonaccount to the list and ensure that the Remote Launch and Remote Activation checkboxes are enabled, then click to close the dialog. Click My Computer Properties dialog to save all changes.on the Give the zenossmonaccount permissions to read the WMI namespace by using WMI Control. Open the My Computer. Select from the menu.menu and right-click on In the Computer Management dialog, expand the Services and Applications item and then right-click on WMI Control. In the WMI Control Properties dialog, click the Security tab. Expand the Root namespace, select the CIMV2 namespace folder and then click Security. In the Security for ROOT\CIMV2 dialog, add the zenossmonuser to the list and ensure the Enable Account and Remote Enable checkboxes are enabled, then click to close the dialog. In the WMI Control Properties dialog click to close the dialog and save all changes.. To gather Windows performance data from PerfMon permissions on the winregregistry key must be granted to our monitoring user by using regedit. Run regedit. Browse to the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurePipeServers\winregkey. Right-click on the winregkey and choose Permissions. Add the monitoring user to the permissions list and grant only Readpermissions Give the zenossmonaccount access to read the Windows Event Log. Once the appropriate changes are made, test that Event Log access works with your zenossmonuser by running the following from your Zenoss system: wmic -U '.\zenossmon' // myhostname\ 'SELECT Message FROM Win32_NTLogEvent WHERE LogFile="Application"' If you are using SP1 or newer with Windows Server 2003, then you will need to allow non-administrative users to access the service control manager in order to monitor services. At a command prompt, run the following: sc sdset SCMANAGER D:(A;;CCLCRPRC;;;AU)(A;;CCLCRPWPRC;;;SY)(A;;KA;;;BA)S:(AU;FA;KA;;;WD) (AU;OIIOFA;GA;;;WD) Warning The above command should be one line. At this point you should be able to query Windows service status remotely using the non-administrative account. This can be tested by running the following from your Zenoss system: wmic -U '.\zenossmon' // myhostname'SELECT Name FROM Win32_Service'
http://community.zenoss.org/docs/DOC-8906
CC-MAIN-2014-49
en
refinedweb
Download FREE PDF Vaadin is a web application development framework that allows you to build web applications much as you would with traditional desktop frameworks, such as AWT or Swing. A UI is built hierarchically from user interface components contained in layout components. User interaction is handled in an event-driven manner. Vaadin supports both a server-side and a client-side development model. In the server-side model, the application code runs on a server, while the actual user interaction is handled by a client-side engine that runs in the browser. The client-server communications and client-side technologies, such as HTML and JavaScript, are invisible to the developer. The clientside engine runs as JavaScript in the browser, so there is no need to install plug-ins. Figure 1: Vaadin Client-Server Architecture The client-side development model allows building new client-side widgets and user interfaces with the GWT toolkit included in Vaadin. The widgets can be integrated with server-side component counterparts to enable using them in server-side applications. You can also make pure client-side UIs, which can communicate with a back-end service. A server-side Vaadin application consists of one or more UI classes that extend the com.vaadin.UI class and implement the init() method. @Title("My Vaadin UI") public class HelloWorld extends com.vaadin.UI { @Override protected void init(VaadinRequest request) { // Create the content root layout for the UI VerticalLayout content = new VerticalLayout(); setContent(content); // Display the greeting content.addComponent(new Label("Hello World!")); } } Normally, you need to: Optionally, you can also: Figure 2: Architecture for Vaadin Applications You can create a Vaadin application project easily with the Vaadin Plugin for Eclipse, with NetBeans, or with Maven. You can get a reference to the UI object associated with the currently processed request from anywhere in the application logic with UI.getCurrent(). You can also access the current VaadinSession, VaadinService, and VaadinServlet objects in the same way. Event Listeners In the event-driven model, user interaction with user interface components triggers server-side events, which you can handle with event listeners. In the following example, we handle click events for a button with an anonymous class: Button button = new Button("Click Me"); button.addClickListener(new Button.ClickListener() { public void buttonClick(ClickEvent event) { Notification.show("Thank You!"); } }); layout.addComponent(button); Value changes in a field component can be handled correspondingly with a ValueChangeListener. By setting the immediate property of a component to true, user interaction events can be fired immediately when the focus changes. Otherwise, they are delayed until the first immediate interaction, such as a button click. In addition to the event-driven model, UI changes can be made from the server-side with server push. Deployment Vaadin applications are deployed to a Java application server as web applications. A UI runs as a Java Servlet, which needs to be declared in a web.xml deployment descriptor, or with the @WebServlet and @VaadinServletConfiguration annotations in a Servlet 3.0 capable server as follows: <web-app> <display-name>myproject</display-name> <servlet> <servlet-name>HelloWorld UI</servlet-name> <servlet-class>com.vaadin.server.VaadinServlet</servlet-class> <init-param> <description>Vaadin application class to start</description> <param-name>application</param-name> <param-value>com.example.myproject.HelloWorld</param-value> </init-param> </servlet> <servlet-mapping> <servlet-name>HelloWorld UI</servlet-name> <url-pattern>/*</url-pattern> </servlet-mapping> </web-app> The VaadinServlet handles server requests and manages user sessions and UIs. All that is normally hidden, but you may need to do some tasks in the custom servlet class. The Eclipse plugin creates the servlet class as an inner class of the UI class. In a Servlet 2.4 capable server, you need to use a web.xml. Vaadin UIs can also be deployed as portlets in a portal. Vaadin components include field, layout, and other components. The component classes and their inheritance hierarchy is illustrated in Figure 4. Component Properties Common component properties are defined in the Component interface and the AbstractComponent base class for all components. Field Properties Field properties are defined in the Field interface and the AbstractField base class for fields. Sizing The size of components is defined in the Sizeable interface. It can be fixed, relative, or undefined in either dimension (width or height). Notice that a layout with an undefined size must not contain a component with a relative (percentual) size. Validation Field values can be validated with validate() or isValid(). You add validators to a field with addValidator(). Fields in a FieldGroup can all be validated at once. Built-in validators are defined in the com.vaadin.data.validator package and include: You can also implement a custom Validator by defining its validate() method. Fields in a FieldGroup bound to a BeanItem can be validated with the Bean Validation API (JSR-303), if an implementation is included in the class path. Resources Icons, embedded images, hyperlinks, and downloadable files are referenced as resources. Button button = new Button("Button with an icon"); button.setIcon(new ThemeResource("img/myimage.png")); External and theme resources are usually static resources. Connector resources are served by the Vaadin servlet. Figure 3: Resource Classes and Interfaces Figure 4: Vaadin Components and Data Model The layout of a UI is built hierarchically from layout components, or more generally component containers, with the actual interaction components as the leaf nodes of the component tree. You start by creating a root layout and set it as the UI content with setContent(). Then you add the other components to that with addComponent(). Single-component containers, most notably Panel and Window, only hold a single content component, just as UI, which you must set with setContent(). The sizing of layout components is crucial. Their default sizes are marked in Figure 4, and can be changed with the sizing methods described earlier. Notice that if all the components in a layout have relative size in a particular direction, the layout may not have undefined size in that direction! Margins Setting setMargin(true) enables all margins for a layout, and with a MarginInfo parameter you can enable each margin individually. The margin sizes can be adjusted with the padding property (as top, bottom, left, and right padding) in a CSS rule with a corresponding v-top-margin, v-bottommargin, v-left-margin, or v-right-margin selector. For example, if you have added a custom mymargins style to the layout: .mymargins.v-margin-left {padding-left: 10px;} .mymargins.v-margin-right {padding-right: 20px;} .mymargins.v-margin-top {padding-top: 30px;} .mymargins.v-margin-bottom {padding-bottom: 40px;} Spacing Setting setSpacing(true) enables spacing between the layout slots. The spacing can be adjusted with CSS as the width or height of elements with the v-spacing style. For example, for a vertical layout: For a GridLayout, you need to set the spacing as left/top padding for a .v-gridlayout-spacing-on element:For a GridLayout, you need to set the spacing as left/top padding for a .v-gridlayout-spacing-on element: .v-vertical > .v-spacing {height: 50px;} } .v-gridlayout-spacing-on { padding-left: 100px; padding-top: 50px; } Alignment When a layout cell is larger than a contained component, the component can be aligned within the cell with the setComponentAlignment() method as in the example below: VerticalLayout layout = new VerticalLayout(); Button button = new Button(“My Button”); layout.addComponent(button); layout.setComponentAlignment(button, Alignment.MIDDLE_CENTER); Expand Ratios The ordered layouts and GridLayout support expand ratios that allow some components to take the remaining space left over from other components. The ratio is a float value and components have 0.0f default expand ratio. The expand ratio must be set after the component is added to the layout. VerticalLayout layout = new VerticalLayout(); layout.setSizeFull(); layout.addComponent(new Label(“Title”)); // Doesn’t expand TextArea area = new TextArea(“Editor”); area.setSizeFull(); layout.addComponent(area); layout.setExpandRatio(area, 1.0f); Also Table supports expand ratios for columns. Custom Layout The CustomLayout component allows the use of a HTML template that contains location tags for components, such as <div location="hello"/>. The components are inserted in the location elements with the addComponent() method as shown below: CustomLayout layout = new CustomLayout("mylayout"); layout.addComponent(new Button("Hello"), "hello"); The layout name in the constructor refers to a corresponding .html file in the layouts subfolder in the theme folder, in the above example layouts/mylayout.html. See Figure 5 for the location of the layout template file. Hundreds of Vaadin add-on components are available from the Vaadin Directory, both free and commercial. You can download them as an installation package or retrieve with Maven, Ivy, or a compatible dependency manager. Please follow the instructions given in Vaadin Directory at. Most add-ons include widgets, which need to be compiled to a project widget set. In an Eclipse project created with the Vaadin Plugin for Eclipse, select the project and click the Compile Vaadin widgets button in the tool bar. Vaadin allows customizing the appearance of the user interface with themes. Themes can include Sass or CSS style sheets, custom layout HTML templates, and graphics. Basic Theme Structure Custom themes are placed under the VAADIN/themes/ folder of the web application (under WebContent in Eclipse projects). This location is fixed and the VAADIN folder specifies that these are static resources specific to Vaadin. Each theme has its own folder with the name of the theme. A theme folder must contain a styles.scss (for Sass) or a styles.css (for plain CSS) style sheet. Custom layouts must be placed in the layouts sub-folder, but other contents may be named freely. Custom themes need to inherit a base theme. The built-in themes in Vaadin 7 are reindeer, runo, and chameleon, as well as a base theme on which the other built-in themes are based. Sass Themes Sass (Syntactically Awesome StyleSheets) is a stylesheet language based on CSS3, with some additional features such as variables, nesting, mixins, and selector inheritance. Sass themes need to be compiled to CSS. Vaadin includes a Sass compiler that compiles stylesheets on-the-fly during development, and can also be used for building production packages. To enable multiple themes on the same page, all the style rules in a theme should be prefixed with a selector that matches the name of the theme. It is defined with a nested rule in Sass. Sass themes are usually organized in two files: a styles.scss and a theme-specific file such as mytheme.scss. Figure 5: Theme Contents With this organization, the styles.scss would be as follows: @import "mytheme.scss"; /* Enclose theme in a nested style with the theme name. */ .mytheme { @include mytheme; /* Use the mixin defined in mytheme.scss */ } The mytheme.scss, which contains the actual theme rules, would define the theme as a Sass mixin as follows: /* Import a base theme.*/ @import "../reindeer/reindeer.scss"; @mixin mytheme { /* Include all the styles from the base theme */ @include reindeer; /* Insert your theme rules here */ .mycomponent { color: red; } } Every component has a default CSS class based on the component type, and you can add custom CSS classes for UI components with addStyleName(), as shown below. Applying a Theme You set the theme for a UI with the @Theme annotation. @Theme("mytheme") @Title("My Vaadin UI") public class MyUI extends com.vaadin.UI { @Override protected void init(VaadinRequest request) { … // Display the greeting Label label = new Label("This is My Component!"); label.addStyleName("mycomponent"); content.addComponent(label); } Vaadin allows binding components directly to data. The data model, illustrated in Figure 4, is based on interfaces on three levels of containment: properties, items, and containers. Properties The Property interface provides access to a value of a specific class with the setValue() and getValue() methods. All field components provide access to their value through the Property interface, and the ability to listen for value changes with a Property.ValueChangeLis¬tener. The field components hold their value in an internal data source by default, but you can bind them to any data source with setPropertyDataSource(). Conversion between the field type and the property type is handled with a Converter. For selection components, the property value points to the item identifier of the current selection, or a collection of item identifiers in the multiSelect mode. The ObjectProperty is a wrapper that allows binding any object to a component as a property. Items An item is an ordered collection of properties. The Item interface also associates a property ID with each property. Common uses of items include form data, Table rows, and selection items. The BeanItem is a special adapter that allows accessing any Java bean (or POJO with proper setters and getters) through the Item interface. Forms can be built by binding fields to an item using the FieldGroup utility class. Containers A container is a collection of items. It allows accessing the items by an item ID associated with each item. Common uses of containers include selection components, as defined in the AbstractSelect class, especially the Table and Tree components. (The current selection is indicated by the property of the field, which points to the item identifier of the selected item.) Vaadin includes the following built-in container implementations: Also, all components that can be bound to containers are containers themselves. Buffering All field components implement the Buffered interface that allows buffering user input before it is written to the data source. The FieldGroup manages buffering for all bound fields. The easiest way to create new components is through composition with the CustomComponent. If that is not enough, you can create an entirely new component by creating a client-side GWT (or JavaScript) widget and a server-side component, and binding the two together with a connector, using a shared state and RPC calls. Figure 6: Integrating a Client-Side widget with a Server-Side API Shared state is used for communicating component state from the server-side component to the client-side connector, which should apply them to the widget. The shared state object is serialized by the framework. You can make RPC calls from both client-side to the server-side, typically to communicate user interaction events, and vice versa. To do so, you need to implement an RPC interface. Defining a Widget Set A widget set is a collection of widgets that, together with inherited widget sets and the communication framework, forms the Client-Side Engine of Vaadin, when compiled with the GWT Compiler into JavaScript. A widget set is defined in a .gwt.xml GWT Module Descriptor. You need to specify at least one inherited base widget set, typically the DefaultWidgetSet or other widget sets. <module> <inherits name="com.vaadin.DefaultWidgetSet" /> </module> The client-side source files must normally be located in a client sub-package under the package of the descriptor. You can associate a stylesheet with a widget set with the <stylesheet> element in the .gwt.xml descriptor: <stylesheet src="/sites/all/modules/dzone/assets/refcardz/085/mycomponent/styles.css"/> Widget Project Structure Figure 7 illustrates the source code structure of a widget project, as created with the Vaadin Plugin for Eclipse. Figure 7: Widget Project Source Structure
http://refcardz.dzone.com/refcardz/vaadin-update
CC-MAIN-2014-49
en
refinedweb
From the developer standpoint, the automated help component, or help system, is often the last consideration when building a Java application. For users, however, the help system is an invaluable asset when learning a new application. As the demand for more full-featured and reliable application help systems has increased, so has the time and productivity burden on application developers. Fortunately, the Java platform includes an API just for building application help systems. In this article, you'll learn how to use the JavaHelp 2.0 API to build a standard, full-featured help system for a simple Java application. You'll start by building a basic application help system that includes a set of topic files, a set of navigation files, and a set of data files. You'll then learn how to make the help system accessible from your Java application and customize it with text- or image-based navigation, pre-set fonts, layered presentation windows, and a searchable database. You'll also learn how to implement context-sensitive features, embed the help system directly into your application, merge multiple help systems into one, and create custom lightweight components for your help system. I'll conclude the article with a quick tour of JavaHelp 2.0's server-side help system framework. Note: This article assumes you are familiar with the design considerations of building a help system and with enterprise application development on the Java platform. Some experience with Swing GUI development will also be helpful. Getting started In this article, you'll build a JavaHelp system for a Tax Calculator application. The Tax Calculator is shown in Figure 1. You'll find the example source code in the Resources section at the end of the article. Locate the Tax Calculator by running FirstLook. Figure 1. The Tax Calculator's About page In Figure 2 you can see the navigational setup for the Tax Calculator help system. Note that some of the icons used for navigation have been highlighted for later discussion. Figure 2. Navigating the help system Using the HelpSet Each JavaHelp help system contains a set of files called the HelpSet. Together, these files provide the foundation of a working application help system. The JavaHelp HelpSet includes three types of files: - HelpSet data files - Navigation files - Topic files You'll find the HelpSet files for the example Tax Calculator help system in the Resources section. The HelpSet data files are TaxCalculatorHelpSet.hs and TaxCalculatorMap.jhm. The navigation files are TaxCalculatorIndex.xml, TaxCalculatorTOC.xml, and TaxCalculatorGlossary.xml. The topic files reside in the folder TaxCalculator. Please use these files to follow the discussion in the sections that follow. The HelpSet data files There are two HelpSet data files named the helpset file and the map file, respectively.The helpset file is the master control file for your help system. It must have the file extension .hs. The map file is used to associate a map ID to each help topic for navigational purposes. The map file has the file extension .jhm. I'll go over each HelpSet file type in detail. The helpset file The helpset file provides the foundation of a working application help system. A typical helpset file has a structure like the one in Listing 1. Listing 1. The helpset file <helpset version="2.0"> <!-- maps section --> <maps>...</maps> <!-- views section --> <view>...</view> ... <!-- presentation section --> <presentation>...</presentation> <!-- implementation section --> <impl>...</impl> </helpset> You'll note that the typical helpset file has four sections: the maps section, the views section, the presentation section, and the implementation section. When a user accesses your help system, the system starts by reading the .hs file. It uses the map file specified in the maps section to find the needed topic files, the navigation files found in the views section to create navigation views, and so on. I'll use the Tax Calculator help system as an example to show how the four sections of the helpset file actually work together. The maps section The <maps> element specifies the map file, which I'll discuss further below. Listing 2 shows the maps section for TaxCalculatorHelpSet.hs. Note that TaxCalculatorMap.jhm is the map file. It resides in the same directory as TaxCalculatorHelpSet.hs. Listing 2. The maps section <maps> <homeID>overview</homeID> <mapref location="TaxCalculatorMap.jhm" /> </maps> The views section The helpset file can specify one or more <view> elements, which specify the help system's navigation views along with their navigation files. Navigation views are displayed in the navigation pane (see Figure 2). When setting up your help system, you can choose from the following views: javax.help.GlossaryView javax.help.TOCView javax.help.IndexView javax.help.FavoritesView javax.help.SearchView Listing 3 specifies a Glossary view and the location of its navigation file. Listing 3. A Glossary view element <view> <name>Glossary</name> <label>Glossary</label> <type>javax.help.GlossaryView</type> <data>TaxCalculatorGlossary.xml</data>  </view> You can use anything you like as the <name> and <label> for a view. The <type> element specifies the navigational view for your help system, and the <data> element specifies the path to the navigation file. If you specify the  <toolbar> <helpaction>javax.help.BackAction</helpaction> <helpaction image="addfav_icon">javax.help.FavoritesAction</helpaction> </toolbar> </presentation> The value of element <name> -- Main_Window -- can be called in Java code. You will use the <title> element to specify the title of the help window, and the <image> element to specify a title-bar icon (see the first panel of Figure 2). Note that icon is defined in the map file below. The <toolbar> element is also configurable. If you do not specify a value, your help system will include some default toolbar buttons. Each <helpaction> corresponds to a toolbar button in the help window toolbar. JavaHelp provides eight types of helpaction, as follows: BackAction ForwardAction SeparatorAction HomeAction ReloadAction PrintAction PrintSetupAction FavoritesAction So, referring back to Figure 2, you'll see that the toolbar buttons are (from left to right): BackAction, ForwardAction, SeparatorAction, HomeAction, ReloadAction, SeparatorAction, PrintAction, PrintSetupAction, and FavoritesAction. You can change the icon for the toolbar buttons by specifying the image attribute of the <helpaction> elements. Note that addfav_icon is defined in the map file below. Setting the default attribute of the <presentation> tag to true will make this the default presentation for your help system. The implementation section In the <impl> element, you can specify the types of files that can be displayed in the content viewer. For example, the code in Listing 5 would enable the user to view HTML and PDF files. Listing 5. The implementation section <impl> <helpsetregistry helpbrokerclass="javax.help.DefaultHelpBroker" /> <viewerregistry viewertype="text/html" viewerclass="com.sun.java.help.impl.CustomKit" /> <viewerregistry viewertype="application/pdf" viewerclass="Your PDF Editor Kit" /> </impl> The map file The map file is the other file type found in the HelpSet data files. It associates a map ID to each help topic by mapping the ID string to the URL of the help topic file. The helpset file and navigation files always refer to help topics by means of map IDs. The map file can assign map IDs to any type of file. Listing 6 shows the map file for the Tax Calculator help system. Listing 6. TaxCalculatorMap.jhm <map version="2.0"> ... <mapID target="overview" url="TaxCalculator/overview.htm" /> <mapID target="icon" url="images/icon.gif" /> <mapID target="glossary_icon" url="images/g.gif" /> <mapID target="addfav_icon" url="images/addfav_icon.gif" /> ... </map> Navigation files Returning to the original HelpSet, you can see that the next file type is navigation files. The four kinds of navigation files are TOC, Index, Glossary, and Favorites. The help system reads the information in these files to build the four types of navigation views and then displays them in the navigation pane. I'll go over the properties of each navigation file in detail. TOC Each <tocitem> element can have the attributes text, target, and image, as shown in Listing 7. The value of the text attribute is the text label of a tocItem. The value of the target attribute is a map ID representing a help topic file. The image attribute specifies a picture displayed to the left of a tocItem. If you do not specify an image, the help system will employ a default one, as shown in the fourth part of Figure 2. Listing 7 shows part of the TOC file for the Tax Calculator help system. Listing 7. TaxCalculatorTOC.xml <toc version="2.0"> ... <tocitem text="Pages" image="topLevel"> <tocitem text="About Page" target="about" image="secondLevel" /> <tocitem text="Color Chooser" target="colorChooser" /> </tocitem> ... </toc> Index Each <indexitem> element can have the attributes text and target. The value of the text attribute is the text label of an indexItem. The value of the target attribute is a map ID representing a help topic file. Listing 8 shows part of the index file for the Tax Calculator help system. Listing 8. TaxCalculatorIndex.xml <index version="2.0"> ... <indexitem text="Color"> <indexitem text="Changing Color" target="changeColor"/> <indexitem text="Color Chooser" target="colorChooser"/> </indexitem> ... </index> Glossary Glossary files also use the <indexitem> element, as shown in Listing 9. Listing 9. TaxCalculatorGlossary.xml <index version="2.0"> ... <indexitem text="Button" target="button_def"/> ... </index> Favorites The help system generates the Favorites.xml file automatically. It is stored in usrdir/javahelp. When more than one JavaHelp system is installed in the same machine, only the Favorites.xml of the latest running help system will be kept. Topic files Topic files are in HTML format. It is advisable to specify the <title> tags in the HTML files, because the <title> tags will be used in the search database for the full text search -- you'll learn about that shortly. After you've coded the topic files, navigation files, the map file and the helpset file, you can open up the helpset by running hsviewer.jar in %JAVAHELP_HOME%\demos\bin. Figure 3 shows what happens when you browse TaxCalculatorHelpSet.hs and click Display. Figure 3. The Tax Calculator HelpSet in the hsviewer Invoking JavaHelp from a Java application After you've built a basic help system, you'll want to be able to invoke it from your Java application. The JavaHelp system can be invoked via a click on a button or a menu item. Listing 10 shows two approaches to invoking the help system using a button. For either approach, the first step is to create a new helpset (as you've done in the previous exercises) and create a help broker for it. You can then use the help broker's enableHelpOnButton() method to call a help system from a button or, alternatively, you could simply add an action listener called CSH.DisplayHelpFromSource(). When the button is clicked, the action listener will get the helpID for the action source and display the helpID in the help viewer. Both approaches are shown in Listing 10. Listing 10. ButtonHelp.java ... help_but=new JButton("Help"); ... Classloader loader = ButtonHelp.class.getClassLoader(); URL url = HelpSet.findHelpSet(loader, "TaxCalculatorHelpSet"); try { hs = new HelpSet(loader, url); } catch (HelpSetException e) { ... } HelpBroker helpbroker = hs.createHelpBroker("Main_Window"); /**-----the first way of calling a help system------------ helpbroker.enableHelpOnButton(help_but, "overview", hs); /**-----the second way of calling a help system----------- ActionListener contentListener = new CSH.DisplayHelpFromSource(helpbroker); CSH.setHelpIDString(help_but, "overview"); help_but.addActionListener(contentListener); */ Invoking the help system from a menu item is a similar procedure. See ButtonHelp.java in the article source code (in Resources) to learn more about invocation with a button, and MenuItemHelp.java to learn about invocation with a menu item. Customizing the look and feel Although the default JavaHelp help system is good for starting out, you likely will want to customize its look and feel to better fit with your application. In this section, you'll learn how to configure special icons for GUI components, customize the navigation tabs and toolbar, and set the help system fonts. Configuring icons The JavaHelp title bar icon, toolbar button icons, and navigation tab icons are configurable in the helpset file. The title bar icon is set using the <presentation> tag; a toolbar button icon can be set as an attribute of the <helpaction> element of the <presentation> tag; and a navigation tab icon can be set in the <view> tag. Additionally, the navigation file item icons can be set in the navigation file. Text or image-based tabs? You can use the default image tabs for your help system or use textual ones. To use textual tabs, set the displayviewimages attribute of the <presentation> tag to false in TaxCalculatorHelpSet.hs. For example, the code <presentation default="true" displayviewimages="false"> will result in the text-based tabs shown in the second panel of Figure 4. Figure 4. Image tab and textual tab Note that the words displayed on the textual tabs shown in Figure 4 are the values of the <label> elements of the <view> tags in the helpset file TaxCalculatorHelpSet.hs. Toolbar or no toolbar? It's possible to display the help window without a toolbar. To decide which option you like best, run ButtonHelp.class from the ButtonHelp.java file and click the Help button. You'll see two help windows, one with a toolbar and the other without it. If you prefer no toolbar, simply create a <presentation> tag without the <toolbar> element in the helpset file TaxCalculatorHelpSet.hs. Setting the font It's fairly easy to set the font for your help system. For example, the code Font font = new Font("SansSerif", Font.ITALIC, 10); helpbroker.setFont(font); will result in the custom font display shown in Figure 5b as compared to the default shown in Figure 5b. Figure 5a. Default font display Figure 5b. Custom font display Working with presentation windows There are three types, or layers, of help window in a JavaHelp help system: - The main window can be iconized, resized, moved by the user, and closed by the user. The main window contains a toolbar, a navigation pane, and a content viewer by default. - The secondary window can be iconized, resized, moved by the user, and closed either by the user or when the help viewer is closed. It can contain a toolbar and a navigation pane, but by default it will contain only a content viewer. - The popup window is especially used for context-sensitive help, although it can be customized for multiple purposes. This window cannot be resized or moved and is closed when it is no longer in focus. It contains only a content viewer. Moving information between the different presentation windows is simple; you can do it by configuring the topic files or by calling functions in Java code. Embedding a <a href=url> tag in a given topic file will result in the linked file being opened in the current window when called, although you will need to do a little more if you want to display a new topic file in a secondary window or popup window (discussed further below). For more finely tuned presentation, you can use the methods with the argument presentation in the JavaHelp API. For example, the enableHelpOnButton() of the HelpBroker class would appear as follows: enableHelpOnButton(java.lang.Object obj, java.lang.String id, HelpSet hs, java.lang.String presentation, java.lang.String presentationName) If you go this route, you willneed to choose main window, secondary window, or popup window for the argument to presentation. For example, the following line would enable the user to open the help topic in a secondary window by clicking the button help_but: helpbroker.enableHelpOnButton(help_but, "overview", hs, "javax.help.SecondaryWindow", "main"); The <object> tag If you choose to work with topic files rather than calling functions in Java code, then you will need to add the <object> tag to the HTML files to move files between different types of windows. Listing 11 shows how to open a secondary window using the <object> tag. Listing 11. Open a secondary window <object CLASSID="java:com.sun.java.help.impl.JHSecondaryViewer"> <param name="id" value="about"> <param name="viewerActivator" value="javax.help.LinkLabel"> <param name="iconByName" value="../images/topLevel.gif"> <param name="viewerSize" value="350,400"> <param name="viewerName" value="TaxCal"> </object> The <object> tag takes multiple parameters. Table 1 introduces the 15 valid parameters to the <object> tag for the secondary window ( CLASSID="java:com.sun.java.help.impl.JHSecondaryViewer). Table 1. Valid parameters to the object tag The four kinds of activators An activator is an object the user clicks to activate a presentation window (popup or secondary). Using the parameter viewerActivator and the parameter text or icon will generate four kinds of activators: Text Label, Image Label, Text Button, or Image Button, as shown in Figure 6. (Note that the second activator is the result of the code in Listing 11.) Figure 6. The four kinds of activator As one of the results of clicking the second or the fourth activator on the About page shown in Figure 6, you would find the invoked secondary window almost the same as the main window. If you refer back to the <presentation> tag of the helpset file, you'll see that you set the Main_Window presentation to default. Because the secondary window also uses this default presentation, it also has a navigation pane and toolbar. See about.htm for the complete code and demonstrations of the invocation of presentation windows. Adding context-sensitive help Context-sensitive help is an especially user-friendly way to deliver information to your users. When a user clicks a particular icon or field -- such as the Calculation fields shown in the examples below -- a pop-up window explains the function or next step that goes with that field. Context-sensitive help can be classified in one of three categories: - Window-level help: When the Java application's window has the focus, the user presses the Help key (F1) to launch the help system with a specific topic, which usually is an introductory topic. - Field-level help: When a specific component on the Java application's GUI, such as a text field or button, has the focus, the user presses the Help key (F1) or clicks a button to launch the help system with a specific topic describing the current component. - Screen-level help: The user clicks a button to invoke the help system with a specific topic that describes the current screen of the application. I'll go over the basics of invoking each type of context-sensitive help in the sections that follow. See the file ContextSensitiveHelp.java for the complete source to go with the code examples. Window-level help Window-level help is invoked when a user is in a given application window and decides that he or she needs help. The user can invoke window-level help by pressing the Help key (F1). The invoked help system is shown in Figure 7. Figure 7. Window-level help invoked with the Help key (F1) In order to enable the Help key (F1) for the Tax Calculator window shown in Figure 7, you would call helpbroker.enableHelpKey(getRootPane(), "personal", hs);. Notice that I have set the help topic with map ID personal for the help system. Note that helpbroker.enableHelpKey() works only with the rootPane of a JFrame in a Swing-based application or a java.awt.Window in an AWT-based application. I will discuss how you can enable the Help key for field level components shortly. Field-level help Field-level help is launched from focus, meaning the location of the cursor, and a press of the Help key. Listing 12 shows how to use CSH.DisplayHelpFromFocus() to enable the Help key for field-level help. Listing 12. Field-level help invoked with the Help key CSH.setHelpIDString(maid_tf, "maid"); CSH.setHelpIDString(course_tf, "course"); CSH.setHelpIDString(income_tf, "income"); CSH.setHelpIDString(result_tf, "result"); ActionListener listener = new CSH.DisplayHelpFromFocus(hs, "javax.help.MainWindow", null); maid_tf.addActionListener(listener); course_tf.addActionListener(listener); income_tf.addActionListener(listener); result_tf.addActionListener(listener); So, continuing the example from above, you click the Calculation tab to switch to the calculation page and move the cursor to the first text field on the page, which is called maid_tf. When the cursor is flashing inside maid_tf, press F1 to display the Maid Levy help topic in the main window, as shown in Figure 8. Figure 8. Field-level help invoked with the Help key It is also possible to set up field-level help to be invoked from a button such as the one with an arrow shown in Figure 9. For this, you would use CSH.DisplayHelpAfterTracking(), as shown below: help_but.addActionListener(new CSH.DisplayHelpAfterTracking(hs,"javax.help.Popup", "popup")); Notice the additional pop-up window to present the help topics. When you click the arrow button and move the cursor to the maid_tf field, and then click the mouse, a popup window with help topic Maid Levy will be displayed, as shown in Figure 9. Figure 9. Field-level help invoked with a button Screen-level help Screen-level help is invoked when a user in a given application screen requests information (or help) about that screen by clicking a button. In this example, I have a tabbed pane with four different screens. Listing 13 shows one way to display help according to the current screen. Listing 13. Screen-level help menu.addChangeListener(new ChangeListener() { public void stateChanged(ChangeEvent evt) { int sel = menu.getSelectedIndex(); if(sel==0) CSH.setHelpIDString(help_but1, "about"); else if(sel==1) CSH.setHelpIDString(help_but1, "preference"); else if(sel==2) CSH.setHelpIDString(help_but1, "personal"); else if(sel==3) CSH.setHelpIDString(help_but1, "calculation"); } }); ... help_but1.addActionListener(new CSH.DisplayHelpFromSource(hs, "javax.help.SecondaryWindow", "secondary")); The object help_but1 represents the button with the word Help in Figure 10. I add a ChangeListener for the tabbed pane menu. Whenever the focus is changed to a new screen, use CSH.setHelpIDString() to set a new helpID for the help_but1. Notice that there is an additional action listener called CSH.DisplayHelpFromSource() for the help_but1. When the user clicks the button, the action listener will get the current helpID for the action source (that is, help_but1) and display the helpID in the help viewer. Therefore, regardless of how the user switches screens (using tabs or menu items), the help system will display correct information for the current screen. Figure 10 shows that when the user switches to the Preference screen the specific topic describing preferences is displayed. Figure 10. Screen-level help Adding embedded help After you have built a default online help system, it is possible to embed it directly into your application's interface. Listing 14 shows how to embed a TOC view into the example Tax Calculator application. toc is a JPanel, and the tabbedpane will associate the toc JPanel to the tab TOC. The argument for getNavigatorView() must be the same as the view's name defined in the helpset file. It is case sensitive. You can embed other navigation views into the application in a similar way. Listing 14. EmbeddedHelp.java viewer = new JHelpContentViewer(hs); viewer.setPreferredSize(new Dimension(250,220)); navTOC = (JHelpTOCNavigator)hs.getNavigatorView("TOC").createNavigator(viewer.getModel()); navTOC.setPreferredSize(new Dimension(150,220)); toc.add(navTOC); toc.add(viewer); Figure 11 shows the results of embedding the TOC into the Tax Calculator application. Note that the help system will be displayed only when the menu item Show is checked. Figure 11. Embedded help Adding search functionality Search functionality is an essential part of your help system. In order to utilize this functionality, you will need to create a search database. The JavaHelp API includes a feature to automatically index your help topics directory and build the search database for you. Follow these steps to create the search database: - Set the directory where your help topics reside as the current directory. src\TaxCalculator is the directory for the example. - Run the %JAVAHELP_HOME%\javahelp\bin\jhindexerTaxCalculatorcommand. Remember that src\TaxCalculator is the directory where the topic files are stored. This will create an indexed folder named JavaHelpSearch in the current directory. - Run the %JAVAHELP_HOME%\javahelp\bin\jhsearchJavaHelpSearchcommand to verify the validity of the search database. If you see the message initialized; enterquery, then the search database has been successfully created. Stop words Stop words are common words that will deliver no result when searched. Below are the default stop words used when creating the search database. Figure 12 shows what happens if you search for the word how in the search view of EmbeddedHelp. No result is displayed because how is a stop word by default. Figure 12. Search results with default stop words Customizing the stop words list It is possible to create a custom list of stop words. For example, let's say that you wanted to exclude the word how from the default list of stop words. There are two ways to do this, both requiring the use of a configuration file. For the first method, you would create a configuration file (config.txt) and then store the stop words in a different file named stopWords.txt. The file config.txt would then specify the location of the stop words file using the line StopWordsFile stopWords.txt. Listing 15 shows the custom stop words file, in which the word how is excluded. Note that every stop word must start with a new line in stopWords.txt. Listing 15. stopWords.txt a all ... his if ... You can also contain all of your stop words directly in the configuration file. For this, you simply create a configuration file named config1.txt and add the stop words directly in the configuration file, as shown here: StopWords a, all,..., his, if... To index with the custom list of stop words you would make src your current directory and run the command %JAVAHELP_HOME%\javahelp\bin\jhindexer -c config.txt TaxCalculator. Replace config.txt in the command with your own configuration file. For example, if you run the jhindexer command with -c config1.txt and then re-run the search for the word how in the EmbeddedHelp view, it will return several result entries, as shown in Figure 13. Figure 13. Search results with custom stop words The red circle in the search window indicates the relevance of the result entry to the query. The more filled in the circle is, the more relevant the result entry is. There are five levels of relevance. In Figure 13, the number indicates the number of times the query has been matched in the result file. The text to the right of the number is extracted from the <title> tag of the help system's topic files. Merging helpsets Large, modularized applications may require the creation of numerous helpsets, perhaps even by different teams working on various aspects of the application. It can be helpful to the user to view each helpset separately, but he or she may also want to see the entire helpset (or topic list) as one. To enable this, it is possible to merge helpsets. There are four merge types in JavaHelp 2.0: SortMerge, UniteAppendMerge, AppendMerge, and NoMerge. The four merge types and their features are listed in Table 2. Table 2. JavaHelp 2.0 merge types Each merge type is specified in the helpset file or in the navigation files. In order to properly merge, the name of views of the existing helpset and the new helpsets must be the same. You can merge helpsets statically or dynamically. Static and dynamic merging You can merge helpsets statically by adding the <subhelpset> tag in the existing helpset file. If you use the absolute path, the location should have the prefix file:\, as in: <subhelpset location="file:\...\helpset.hs">. If you use the relative path, the location should look something like this: <subhelpset location="helpset.hs">. You can merge helpsets dynamically using the methods hs.add(hs1); hs.remove(hs1). hs is the existing helpset and hs1 is the new helpset. To see the results of a dynamic merging operation, see MergeHelp.java in the article source and run MergeHelp. The results In Figure 14 there are two helpset TOCs and then the results of merging them. Figure 14. Merge The merged helpset in the third panel of Figure 14 is the result of adding the attribute mergetype="javax.help.UniteAppendMerge" to the existing helpset's TOC view tag, as shown in Listing 16. Listing 16. Mergetype attribute -- TaxCalculatorHelpSet.hs <view mergetype="javax.help.UniteAppendMerge"> <name>TOC</name> <label>TOC</label> <type>javax.help.TOCView</type> <data>TaxCalculatorTOC.xml</data> </view> The fourth panel of Figure 14 shows the result of adding the attribute mergetype="javax.help.SortMerge" to the tocitem Pages in the TOC navigation file of the existing helpset, as shown in Listing 17. Note the different presentation of the Pages topic in the third and fourth panels. In the third panel, the elements of the new view are appended at the end of the existing view. In the fourth panel, the elements of the new view and the existing view are sorted alphabetically. Listing 17. Mergetype attribute -- TaxCalculatorTOC.xml <tocitem text="Pages" image="topLevel" mergetype="javax.help.SortMerge"> ... </tocitem> When the menu item Add is checked, the two helpsets are merged. When the menu item is unchecked the application removes the second helpset from the first. Adding lightweight components The Java platform's lightweight component class has many applications in customizing your help system. For example, you've already seen how to use the lightweight component class JHSecondaryViewer and the <object> tag to create and open secondary and popup windows. In this section, you'll build on that exercise to create your own lightweight component and then manipulate it using the <object> tag. You'll develop a component class, LightWeightCom, that plays an audio clip when the user clicks an image. See the article source, LightWeightCom.java and LightWeightComBeanInfo.java, to follow along with the examples in this section. To see the results, go to about.htm. The BeanInfo class shown in Listing 18 provides explicit information about the lightweight component. It must extend SimpleBeanInfo. getPropertyDescriptors() is the only method used by ContentViewer in this class. Listing 18. LightWeightComBeanInfo.java public PropertyDescriptor[] getPropertyDescriptors() { PropertyDescriptor back[] = new PropertyDescriptor[4]; try { back[0] = new PropertyDescriptor("iconByName", LightWeightCom.class); back[1] = new PropertyDescriptor("iconByID", LightWeightCom.class); back[2] = new PropertyDescriptor("audioByName", LightWeightCom.class); back[3] = new PropertyDescriptor("audioByID", LightWeightCom.class); return back; } catch (Exception ex) { return null; } } The lightweight component class must directly extend java.awt.Component or java.awt.Container, or a class that implements a lightweight component. The example extends JLabel, which is a class implementing a lightweight component. If your component will make use of the information from View, then you must implement the setViewData() method of the com.sun.java.help.impl.ViewAwareComponent interface. For the example audio component I will make use of View's helpset and document base information, as shown in Listing 19. Listing 19. LightWeightCom.java -- implement setViewData public class LightWeightCom extends JLabel implements ViewAwareComponent { ... public void setViewData(View v) { myView = v; doc = (HTMLDocument) myView.getDocument(); base = doc.getBase(); // Loop through and find the JHelpContentViewer Component c = container = (Component) myView.getContainer(); while (c != null) { if (c instanceof JHelpContentViewer) { break; } c = c.getParent(); } // Get the helpset if there was JHelpContentViewer if (c !=null) { TextHelpModel thm = ((JHelpContentViewer)c).getModel(); if (thm != null) { hs = thm.getHelpSet(); } } } ... } The lightweight component should be able to accept parameters. For example, Listing 20 is designed to accept the param audioByID. The setAudioByID method uses the helpset information retrieved from setViewData(). The method gets the topic file location using map ID, as shown below. Listing 20. LightWeightCom.java -- setAudioByID public void setAudioByID(String name) { sound = null; URL url=null; Map map = hs.getCombinedMap(); try { url = map.getURLFromID(ID.create(name, hs)); } catch (java.net.MalformedURLException e2) { return; } sound = Applet.newAudioClip(url); } After compiling LightWeightCom.java and LightWeightComBeanInfo.java, add their classes to your classpath and add the lines in Listing 21 to your help topic files. In this example, I add them to about.htm. As a result, when the image labels are clicked, different audio clips will be played. Note that attribute CLASSID of the <object> tag must start with java:; otherwise, the help viewer will ignore it. Listing 21. Object tag in about.htm <OBJECT CLASSID="java:LightWeightCom"> <param name="iconByID" value="top"> <param name="audioByID" value="music"> </OBJECT> <OBJECT CLASSID="java:LightWeightCom"> <param name="iconByName" value="../images/leaf2.gif"> <param name="audioByName" value="../audio/voice.au"> </OBJECT> Server-based help Server-based applications also need online help. In this section, you will learn how to present your help system to users on a network. In order to follow the exercises in this section, it will be helpful if you're familiar with the Tomcat Web server, as well as the basics of JavaBeans and JavaServer Pages (JSP) technologies, along with JavaScript and HTML scripts. Setting up I'll use the Tomcat Web server 4.1.18 for the examples in this section. If you do not have Tomcat version 4.0 or higher installed on your development machine, you should install it now. (See Resources for more information.) For the purpose of the exercises, create a folder in the webapps directory on the Tomcat Web server and name the folder TaxCalculatorHelp. For the simplest possible server-based help setup, I'll reuse the code from JavaHelp 2.0's serverhelp demo. You'll find this code in the %JAVAHELP_HOME%/demos/serverhelp/web directory. Copy all the .js, .html, .jsp, and .tld files and the subfolder images to your new TaxCalculatorHelp folder. Copy your own helpset files to this folder, too. Finally, in TaxCalculatorHelp, create a folder called WEB-INF and two subfolders called classes and lib, respectively. Copy jh.jar to WEB-INF/lib, and you should be set. See the article source for an example of this setup. The JavaHelp server bean ServletHelpBroker is the JavaBeans component that stores help state information, such as the helpset in use, the current ID, the current navigation view, and other pieces of help information. Line 1 of Listing 22 defines the help broker. Lines 2 and 3 set up the help broker for a specific helpset by providing the helpSetName. Lines 4 to 6 merge a new helpset to the existing HelpSet. If the merge attribute is set to false, the help broker will only work for the existing HelpSet. Listing 22. JavaHelp server bean 1. <jsp:useBean 2. <jh:validate 4. <jh:validate JavaScript files There are several important JavaScript files. tree.js is used to build a tree. The navigation trees for the TOC and Index views can be created using this file. You can use the code in Listing 23 to build a tree. The file searchList.js can be used to build a tree for the Search view. util.js checks whether any change in the content has occurred. If a change has occurred, an update will be fired with the change. Listing 23. Build a tree indexTree = new Tree(name, lineHeight, selectColor, showIcon, expandAll) indexTree.addTreeNode(parent, idnum, icon, content, helpID, URLData, expandType) indexTree.drawTree(); indexTree.refreshTree(); JSP files There are several important JSP files to review. navigator.jsp is used to get the views from the helpset file. javax.help.TOCView.jsp, javax.help.SearchView.jsp, and javax.help.IndexView.jsp each build their corresponding views. The help.jsp file controls the overall presentation of the help window. The top frame of the help window in Figure 15 shows banner. You may create your own banner by modifying the banner.html file. You can also exclude the banner. The lower-left frame of Figure 15 contains the file navigator.jsp and a tree navigator. The lower-right frame contains the file toolbar.html and the help topic content viewer. You can change the GUI presentation by moving the frame to another location and by including and/or excluding frames. Let's look at the navigator.jsp file first. Listing 24. navigator.jsp <jh:navigators <td classtableDefStyle <A class=tabbedAnchorStyle <IMG src="<%= iconURL!=""? iconURL : "images/" + className +".gif" %>" Alt="<%= tip %>" border=0> </A> </td> </jh:navigators> Table 3 lists all the JSP extensions. Note that all JSP extensions must start with the notation jh:. Table 3. JSP extensions Note that the scripting variables are nested within the body of the JSP extensions navigators, tocItem, indexItem, and searchTOCItem. Therefore, Listing 24 gets all views from the helpset file using the nested variable name. The table below shows the variables you can use between the <jh:navigators> beginning tag and ending tag. Table 4. Navigator variables Now, let's look at the file javax.help.TOCView.jsp. Listing 25. javax.help.TOCView.jsp tocTree = new Tree("tocTree", 16, "ccccff", true, false); <% TOCView curNav = (TOCView)helpBroker.getCurrentNavigatorView(); %> <jh:tocItem tocTree.addTreeNode("<%= parentID %>","<%= nodeID %>","<%= iconURL!=""?iconURL:"null" %>", "<%= name %>","<%= helpID %>","<%= contentURL!=""?contentURL:"null" %>", "<%= expansionType%>" ); </jh:tocItem> tocTree.drawTree(); tocTree.refreshTree(); This code builds a TOC navigation tree for the current helpset. The navigation trees for the Search and Index views can be created in a similar way. You already know the JSP extensions tocItem, indexItem, and searchTOCItem, and their attributes. Next, you will learn their nested variables, which can be called between the <jh:tocItem>, <jh:indexItem>, and <jh:searchTOCItem> beginning and ending tags, from the Tables 5 and 6 below. Table 5. Features of the tocItem and indexItem variables Table 6. Features of the searchTOCItem variable Testing server-side help The final step of any development process is, of course, to test the results of your work. To test the server-side help system, follow these steps: - Run the Tomcat server - Open your Web browser - Go to the URL Following the links on the Web page, you'll find the server-based help system. When following the first link, you will see a window the same as the one in Figure 15a. When you follow the second link, you will see a window the same as the one in Figure 15b. Figure 15a. Server-based help Figure 15b. Server-based help Conclusion This article has served as an introduction to JavaHelp 2.0, the Java platform's help system API. With JavaHelp, you can easily incorporate a full-featured, standard help system into any Java application, component, or device. A standalone JavaHelp help system can run on any platform and can be embedded into any application. With JavaHelp 2.0, it is also possible to develop a robust, although not so full-featured, help system for users on a network. JavaHelp 2.0 has many great features, which we've begun to explore together in this article. Through step-by-step explanation and exercises, you have learned how to create and manipulate the topic files, navigation files and helpset data files at the core of the JavaHelp 2.0 help system. You have also learned how to customize your helpset, embed it into your existing Java application, merge helpsets, create lightweight component add-ons for your help system, offer your users context-sensitive help, and more. We concluded the article with an introduction to the server-side help features of the JavaHelp 2.0 API. From here, I encourage you to practice what you've learned. Study the article source and try building different features and different types of help systems using the JavaHelp 2.0 API. See the article Resources for further reference. Download Resources - See the Java technology home page to download the latest J2SE JDK. - If you don't already have it, you'll need to install Tomcat 4.0 or higher to complete the exercises in this article. - See Sun's JavaHelp home page to learn more about the JavaHelp 2.0 API. - For further reference, see the JavaHelp 2.0 system user's guide. - The article "Sun Officially Unveils JavaHelp 2.0 Beta" (WinWriters.com, 2003) offers additional insight on the pros and cons of JavaHelp 2.0. - For a book-length introduction to the JavaHelp API, see Creating Effective JavaHelp by Kevin Lewis (O'Reilly, 2000). - You'll find articles about every aspect of Java programming in the developerWorksJava.
http://www.ibm.com/developerworks/java/library/j-javahelp2/
CC-MAIN-2014-49
en
refinedweb
When. Some things to be clear about up front. At the time of writing this, Angular is in beta version 3. The Angular team claims that no breaking changes will be introduced in the betas, but I figured it is best we lay it out on the table that it could be a possibility. I also want to be clear that a lot of this code was co-produced with a friend of mine, Todd Greenstein. This tutorial will be broken up into two parts: We won’t be doing anything fancy here such as file manipulations or storing them in a database. Our purpose is just to show how it is done. There aren’t many requirements here, but you can see what is necessary below: The core of this tutorial will use NPM for getting our dependencies. The first thing we want to do is create a very simplistic Angular application. We will make sure to use TypeScript since not only does it come recommended by the Angular team, but it is also very convenient to use. From the Command Prompt (Windows) or Terminal (Mac and Linux), execute the following commands to create our file and directory structure: mkdir src touch src/index.html touch src/tsconfig.json mkdir src/app touch src/app/app.html touch src/app/app.ts npm init -y Of course make sure you do this in a root project directory. I have my project root at ~/Desktop/FrontEnd. With the project structure in place, let’s grab the Angular dependencies for development. They can be installed by running the following from your Command Prompt or Terminal: npm install angular2@2.0.0-beta.2 systemjs typescript live-server --save Yes you’ll need Node and the Node Package Manager (NPM) to install these dependencies. Before we start configuring TypeScript and our project, let’s define how it will be run. This can be done via the package.json file that was created when running the npm init -y command. In the scripts section of the file, replace it with the following: "scripts": { "test": "echo \"Error: no test specified\" && exit 1", "tsc": "tsc -p src", "start": "live-server --open=src" }, This will allow us to run npm run tsc to compile our TypeScript files and npm run start to create our simple server. Next up is configuring how our TypeScript files get compiled. These settings end up in our src/tsconfig.json file. Open it and add the following code: { "compilerOptions": { "target": "ES5", "module": "commonjs", "sourceMap": true, "emitDecoratorMetadata": true, "experimentalDecorators": true, "removeComments": false, "noImplicitAny": false } } Pretty standard configuration file. Time to start building the Angular application. The file upload stuff will happen towards the end. Open the project’s src/index.html file so we can include the Angular dependencies in our code. Make sure yours looks like the following: <html> <head> <title>File Uploading</title></script> <script> System.config({ packages: {'app': {defaultExtension: 'js'}} }); System.import('app/app'); </script> </head> <body> <my-app>Loading...</my-app> </body> </html> Again we’re basically just including the Angular JavaScript dependencies and telling our application to make use of the src/app/app.js file that will be generated when compiling the src/app/app.ts file. The <my-app> tag will be populated when the App class loads. Now it makes sense to get the base of our project’s src/app/app.ts file started. Open it and include the following code. I’ll explain what is happening after: import { Component, View } from "angular2/core"; import { bootstrap } from "angular2/platform/browser"; @Component({ selector: "my-app", templateUrl: "app/app.html", directives: [] }) class App { constructor() { } } bootstrap(App, []); This is a basic TypeScript file for Angular. We’re creating an App class and bootstrapping it at the end. This class maps to the src/app/app.html template as defined in the @Component tag. This is where things could start to get weird. Open the src/app/app.html file and include the following code: <input type="file" (change)="fileChangeEvent($event)" placeholder="Upload file..." /> <button type="button" (click)="upload()">Upload</button> Our HTML view will be very basic. Just a file-upload form element and an upload button. However there are two important pieces here. First, take note of the (change) tag in the file input. If you’re familiar with AngularJS 1 and Angular and how they use the ng-model or [(ngModel)] you might be wondering why I’m not just using that. The short answer on why we can’t use [(ngModel)] is that it doesn’t work. In the current version of Angular, it doesn’t map correctly which is why I had to cheat with the change event. When a file is chosen from the picker, the fileChangeEvent function is called and an object is passed. We’re looking for object.target.files, but let’s not get ahead of ourself here. The whole purpose of the change event is to store reference to the files in question so that when we click the upload button, we can upload them. This is where the (click) event of the upload button comes into play. When the user clicks it, we will use the reference obtained in the change event and finalize the upload via a web request. Let’s jump into our project’s src/app/app.ts file again. We need to expand upon it. Make it look like the following: import { Component, View } from "angular2/core"; import { bootstrap } from "angular2/platform/browser"; @Component({ selector: "my-app", templateUrl: "app/app.html", directives: [] }) class App { filesToUpload: Array<File>; constructor() { this.filesToUpload = []; } upload() { this.makeFileRequest("", [], this.filesToUpload).then((result) => { console.log(result); }, (error) => { console.error(error); }); } fileChangeEvent(fileInput: any){ this.filesToUpload = <Array<File>> fileInput.target.files; } makeFileRequest(url: string, params: Array<string>, files: Array<File>) { return new Promise((resolve, reject) => { var formData: any = new FormData(); var xhr = new XMLHttpRequest(); for(var i = 0; i < files.length; i++) { formData.append("uploads[]", files[i], files[i].name); } xhr.onreadystatechange = function () { if (xhr.readyState == 4) { if (xhr.status == 200) { resolve(JSON.parse(xhr.response)); } else { reject(xhr.response); } } } xhr.open("POST", url, true); xhr.send(formData); }); } } bootstrap(App, []); We just added a lot of code so we should break it down. Let’s start with what we know based on the src/app/app.html file that we created. The first thing that happens in our UI is the user picks a file and the fileChangeEvent is triggered. The event includes a lot of information that is useless to us. We only want the File array which we store in a filesToUpload scope. The second thing that happens is we click the upload button and the upload function is triggered. The goal here is to make an asynchronous request passing in our array to an end-point that we’ve yet to build. This is where things get even more crazy. As of Angular beta version 3, there is no good way to upload files. JavaScript’s FormData is rumored to work with very sketchy behavior, something we don’t want to waste our time with. Because of this we will use XHR to handle our requests. This is seen in the makeFileRequest function. In the makeFileRequest function we create a promise. We are going to loop through every file in the File array even though for this particular project we’re only passing a single file. Each file is appended to the XHR request that we’ll later send. The promise results happen in the onreadystatechange function. If we get a 200 response let’s assume it succeeded, otherwise lets say it failed. Finally, fire off the request. To receive files from an upload in Node.js we need to use a middleware called Multer. This middleware isn’t particularly difficult to use, but that doesn’t matter, I’m going to give you a step by step anyways! In another directory outside your front-end project, maybe in ~/Desktop/BackEnd, execute the following with your Terminal (Mac and Linux) or Command Prompt (Windows): npm init -y We’ll need to install the various Node.js, Express and Multer dependencies now. Execute the following: npm install multer express body-parser --save We can now start building our project. Multer will save all our files to an uploads directory because are going to tell it to. Create it at the root of our Node.js project by executing: mkdir uploads Now create a file at the root of your project called app.js and add the following code: var express = require("express"); var bodyParser = require("body-parser"); var multer = require("multer"); var app = express(); app.use(bodyParser.json()); app.use(bodyParser.urlencoded({ extended: true })); app.use(function(req, res, next) { res.header("Access-Control-Allow-Origin", "*"); res.header("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept"); next(); }); app.post("/upload", multer({dest: "./uploads/"}).array("uploads", 12), function(req, res) { res.send(req.files); }); var server = app.listen(3000, function() { console.log("Listening on port %s...", server.address().port); }); Breaking it down, we are first requiring all the dependencies that we downloaded. We are using Express Framework and allowing the body-parser to accept POST requests. Currently the front-end and back-end are separated. Running them as of now will require different host names or ports. Because of this we need to allow cross origin resource sharing (CORS) within our application. Now we get into the small bit of Multer. When a POST request hits the /upload endpoint Multer will place the files in the uploads directory. The files are discovered via the uploads property defined in .array("uploads[]"). This property is also defined in our Angular front-end in the XHR request. Finally we operate our Node.js server on port 3000. As of right now the application is split into two parts, the front-end and back-end. For this example it will remain that way, but it doesn’t have to. Starting with the back-end, execute the following from the Terminal or Command Prompt: node app.js Remember, the back-end must be your current working directory. With the back-end serving on, navigate to the front-end project with a new Terminal or Command Prompt. From the front-end directory, execute the following: npm run tsc npm run start The above commands will compile all the TypeScript files and then start the simple HTTP server. You can now navigate to from your browser. We saw quite a bit here, however most of this was related to configuring either Angular or Node.js. The actual file upload requests were rather small. Now you might be wondering why I didn’t just choose to use the Angular File Upload library. I couldn’t figure out how to get it working. It is a bummer that as of right now these upload features aren’t baked into Angular through some kind of function, but at least we have XHR and I found it to get the job done with minimal effort. If you want to see a more complex version of this example, check out this project I worked on with Todd Greenstein on GitHub.
https://www.thepolyglotdeveloper.com/2016/02/upload-files-to-node-js-using-angular-2/
CC-MAIN-2021-10
en
refinedweb
37 Java Collections Interview Questions And Answers – Collections in Java Interview Questions For Experienced 2020 from Codingcompiler. Test your Java Collections knowledge by answering these tricky interview questions on Java collections. Let’s start learning Java collections interview questions and prepare for Java interviews. All the best for your future and happy learning. Java Collections Interview Questions - Define the concept of “collection”. - What are the benefits of using collections? - What data can store collections? - What is the hierarchy of collections? - What do you know about collections of type List? - What do you know about collections like Set? - What do you know about collections like Queue? - What do you know about collections of type Map, what is their fundamental difference? - What are the main implementations of List, Set, Map? - What implementations of SortedSet do you know and what are their features? - What are the differences / similarities between List and Set? - What is different / common for the ArrayList and LinkedList classes, when is it better to use ArrayList, and when is LinkedList? - When is it wise to use an array rather than an ArrayList? - What is the difference between ArrayList and Vector? - What do you know about the implementation of the HashSet and TreeSet classes? - What is the difference between HashMap and TreeMap? How do they work and work? What with access time to objects, what dependences? - What is a Hashtable, how does it differ from a HashMap? To date, it is deprecated, how to still use the necessary functionality? - What will happen if we put two values with the same key in Map? - How is the order of objects in the collection, how to sort the collection? - Give the definition of “iterator”. - What is the functionality of the Collections class? - How to get a non-modifiable collection? - What collections are synchronized? - How to get synchronized collection from non-synchronized? - How to get a collection only for reading? - Why is the Map not inherited from Collection? - What is the difference between Iterator and Enumeration? - How is the foreach loop implemented? - Why is there no iterator.add () method to add elements to the collection? - Why is there no method in the iterator class to get the next item without moving the cursor? - What is the difference between Iterator and ListIterator? - What are the ways to iterate through all the elements of the List? - What is the difference between fail-safe and fail-fast properties? - What should I do to prevent a ConcurrentModificationException? - What is the stack and the queue, what are the differences between them? - What is the difference between Comparable and Comparator interfaces? - Why collections do not inherit the Cloneable and Serializable interfaces? Related Java Collections Interview Questions & Answers The theme of the Java collections is incredibly extensive, and in order to answer each question a separate article is deeply needed for almost every question. I recommend reading the additional material indicated in the responses. 1. Define the concept of “collection”. Collections / containers in Java are called classes whose main purpose is to store a set of other elements. 2. What are the benefits of using collections? Arrays have significant drawbacks. One of them is the final size of the array, as a result, the need to monitor the size of the array. Another is indexing, which is not always convenient, since limits the ability to add and delete objects. To get rid of these shortcomings for several decades, programmers have been using recursive data types , such as lists and trees . The standard set of Java collections serves to relieve the programmer from the need to independently implement these types of data and provides it with additional features. 3. What data can store collections? Collections can store any reference data types. 4. What is the hierarchy of collections? It should be noted here that interface Map is not part of the interface Collection hierarchy. With Java 1.6, the TreeSet and TreeMap classes implement the NavigableSet and NavigableMap interfaces, which extend the SortedSet and SortedMap interfaces, respectively (SortedSet and SortedMap extend Set and Map). 5. What do you know about collections of type List? List is an ordered list. Objects are stored in the order they are added to the list. Access to the elements of the list by index. 6. What do you know about collections like Set? Set – a set of non-repeating objects. In the collection of this type only one reference of the type null is allowed . 7. What do you know about collections like Queue? Queue is a collection designed to store items in the order they need to be processed. In addition to the basic operations of the Collection interface, the queue provides additional insertion, retrieval, and control operations. Queues are usually, but not necessarily, ordered elements in a FIFO (first-in-first-out, “first in, first out”) order. The offer () method inserts an item into the queue; if that fails, it returns false . This method differs from the Add ()method of the Collection interface in that the add () method can unsuccessfully add an element only using unchecked exceptions. The remove () and poll () methods remove the top of the queue and return it. Which item will be deleted (first or last) depends on the queue implementation. The remove () and poll () methods differ only in the behavior when the queue is empty: the remove () method throws an exception, and the poll () method returns null . The element () and peek () methods return (but do not remove) the top of the queue. java.util.Queue <E> implements a FIFO buffer. Allows you to add and retrieve objects. In this case, objects can be obtained in the order in which they were added. Implementations : java.util.ArrayDeque <E> , java.util.LinkedList <E> . java.util.Deque <E> inherits java.util.Queue <E> . Bidirectional turn. Allows you to add and remove objects from two ends. It can also be used as a stack. Implementations : java.util.ArrayDeque <E> , java.util.LinkedList <E> . 8. What do you know about collections of type Map, what is their fundamental difference? The java.util.Map <K, V> interface is used to map each element from one set of objects (keys) to another (values). Moreover, each element of the set of keys is mapped to exactly 1 element of the set of values. At the same time, one element from the set of values may correspond to 1, 2 or more elements from the set of keys. The java.util.Map <K, V>interface describes the functionality of associative arrays. Implements : java.util.HashMap <K, V> , java.util.LinkedHashMap <K, V> , java.util.TreeMap <K, V> , java.util.WeakHashMap <K, V> . java.util.SortedMap <K, V> inherits java.util.Map <K, V> . Implementations of this interface provide storage of key set elements in ascending order (see java.util.SortedSet). Implementations : java.util.TreeMap <K, V> . 9. What are the main implementations of List, Set, Map? Top Java Collections Interview Questions And Answers 10. What implementations of SortedSet do you know and what are their features? java.util.SortedSet <E> inherits java.util.Set <E> . Implementations of this interface, in addition to monitoring the uniqueness of stored objects, support them in ascending order. The order relationship between objects can be defined using either the java.lang.Comparable <T> interface’s compareTo method , or a special comparator class that inherits the java.util.Comparator <T> interface . Implementations: java.util.TreeSet <E> is a collection that stores its elements as ordered by the values of the tree. TreeSet encapsulates a TreeMap, which in turn uses a balanced binary red-black tree to store items. TreeSet is good because the operations add, remove, and contains take guaranteed log (n) time. 11. What are the differences / similarities between List and Set? Both are inherited from Collection , which means they have the same set of method signatures. List stores objects in the order of insertion, the item can be obtained by index. Set cannot store the same elements. 12. What is different / common for ArrayList and LinkedList when it is better to use ArrayList, and when is LinkedList? ArrayList is implemented internally as a regular array . Therefore, when inserting an element into the middle, you must first move all elements one after it, and only then insert a new element into the vacant space. But it quickly implements taking and changing an element — get, set operations — because in them we simply refer to the corresponding element of the array. LinkedList is implemented internally in a different way. It is implemented as a linked list : a set of separate elements, each of which stores a link to the next and previous elements. To insert an element in the middle of such a list, it is enough to change the links of its future neighbors. But to get the item with the number 130, you need to walk consistently on all objects from 0 to 130. In other words, the operation set and get immediately implemented very slowly . Look at the table: If you need to insert (or delete) in the middle of the collection a lot of elements, it is better to use LinkedList. In all other cases – ArrayList. LinkedList requires more memory to store the same number of elements, because besides the element itself, there are also pointers to the next and previous elements of the list, whereas in ArrayList the elements just go in order. 13. When is it wise to use an array rather than an ArrayList? In short, Oracle writes – use ArrayList instead of arrays. If you need to answer this question differently, you can say the following: arrays can be faster and eat less memory. Lists lose performance due to the ability to automatically increase in size and related checks. Plus, the list size does not increase by 1, but by a larger number of elements (+15) *. Also, access to [10] in an array can be faster than calling get (10) on a list. * The reader has sent a comment. “ArrayList has a 1.5-fold increase. int newCapacity = oldCapacity + (oldCapacity >> 1); “. 14. What is the difference between ArrayList and Vector? Vector deprecated. At Vector, some methods are synchronized and therefore they are slow. In any case, Vector is not recommended at all. Advanced Java Collections Interview Questions And Answers 15. What do you know about the implementation of the HashSet and TreeSet classes? The name Hash … comes from the concept of a hash function. A hash function is a function that narrows down the set of values of an object to a subset of integers. The Object class has a hashCode () method , which is used by the HashSet class to efficiently allocate objects to the collection. In classes of objects stored in a HashSet , this method must be overridden. HashSet is based on a hash table, and TreeSet is based on a binary tree. HashSet is much faster than TreeSet (constant time versus logarithmic for most operations like add , remove , contains), but TreeSet guarantees the ordering of objects. Both are not synchronized. Hashset - provides constant time for add () , remove () , contains () and size () - the order of the elements in the container may change - container iteration performance depends on capacity and “load factor” (it is recommended to leave the load factor as the default value of 0.75, which is a good compromise between the access time and the amount of stored data) Treeset - time for basic operations add () , remove () , contains () – log (n) - guarantees the order of the elements - does not provide any parameters for performance tuning - provides additional methods for an ordered list: first () , last () , headSet () , tailSet () , etc. Valid response to StackOverflow 16. What is the difference between HashMap and TreeMap? How do they work and work? What with the time of access to objects, which dependencies? In general, the answer about HashSet and TreeSet fits this question. HashMap runs strictly faster than TreeMap . TreeMap is implemented on a red-black tree, the time to add / search / delete an element is O (log N), where N is the number of elements in the TreeMap at the moment. With HashMap , the access time to an individual element is O (1), provided that the hash function ( Object.hashCode ()) is normally defined (which is true in the Integer case ). A general recommendation is to use HashMap if orderliness is not needed . The exception is the situation with real numbers, which are almost always very bad as keys. For them, you need to use a TreeMap , having first set a comparator for it, which compares real numbers as it is necessary in this task. For example, for ordinary geometric problems, two real numbers can be considered equal if they differ by no more than 1e-9. 17. What is a Hashtable, how does it differ from a HashMap? To date, it is deprecated, how to still use the necessary functionality? Some HashTable methods are synchronized, so it is slower than a HashMap . - HashTable is synchronized, but HashMap is not. - HashTable does not allow null keys or values. HashMap allows you to have one null key and as many null values as you like. - In HashMap is a subclass of LinkedHashMap , which adds the ability to iterate. If you need this functionality, you can easily switch between classes. General note – it is not recommended to use HashTable even in multi-threaded applications. For this there is a ConcurrentHashMap . 18. What will happen if we put two values with the same key in Map? The last value overwrites the previous one. 19. How is the order of objects in the collection, how to sort the collection? The TgeeMar class fully implements the SortedMap interface . It is implemented as a binary search tree, so its elements are stored in an ordered manner. This greatly speeds up the search for the desired item. The order is set either by the natural following of elements, or by an object that implements the Comparator comparison interface . There are four constructors in this class: TgeeMar () – creates an empty object with a natural order of elements; TreeMar (Comparator с) – creates an empty object, in which the order is given by the object of comparison with; ТеееМар (Map f) – creates an object containing all elements of the map f, with the natural order of its elements; TeeMar (SortedMap sf) – creates an object containing all the sf display elements in the same order. The Comparator interface describes two comparison methods: int compare (Object obj1, object obj2) – returns a negative number if obj1 is in some sense less than obj2 ; zero if they are considered equal; positive number if obj1 is greater than obj2 . For readers familiar with set theory, let’s say that this comparison method has the properties of identity, antisymmetry and transitivity; boolean equals (Object obj) – compares this object with the object obj , returning true if the objects match in any sense specified by this method. For each collection, you can implement these two methods by specifying a specific way of comparing elements, and define an object of the SortedMap class by the second constructor. Elements of the collection will be automatically sorted in the specified order. Java Collections Interview Questions And Answers For Experienced 20. Give the definition of “iterator”. Iterator – an object that allows you to iterate over the elements of the collection. For example, foreach is implemented using an iterator. One of the key methods of the Collection interface is the Iterator <E> iterator () method . It returns an iterator — that is, an object that implements the Iterator interface . The Iterator interface has the following definition: 21. What functionality does the Collections class represent? Some of the methods 22. How to get a non-modifiable collection? A read-only collection can be obtained using the methods: 23. What collections are synchronized? To do this, use the package Concurrent . And so @Deprecated HashTable , Vector . 24. How to get synchronized collection from non-synchronized? Use the following methods: - Collections.synchronizedList (list) ; - Collections.synchronizedSet (set) ; - Collections.synchronizedMap (map) ; They all take a collection as a parameter, and return a thread-safe collection with the same elements inside. Java Collection Interview Questions And Answers For 3 Years Experience 25. How to get a collection only for reading? Use the following methods: - Collections.unmodifiableList (list) ; - Collections.unmodifiableSet (set) ; - Collections.unmodifiableMap (map) ; They all take the collection as a parameter, and return the collection as read-only with the same elements inside. 26. Why is the Map not inherited from Collection? They are not compatible, because created for various data structures. Map uses a key-value pair. 27. What is the difference between Iterator and Enumeration? Enumeration is twice as fast as Iterator and uses less memory. Iterator is thread-safe because does not allow other threads to modify the collection when iterating. Enumeration can only be used for read-only collections. It also has no remove () method ; Enumeration: hasMoreElement () , nextElement () Iterator: hasNext () , next () , remove () 28. How is the foreach loop implemented? Implemented based on Iterator . 29. Why is there no iterator.add () method to add elements to the collection? The iterator’s only task is to iterate over the collection. Each collection has an add () method that you can use. It makes no sense to add this method to the iterator, because the collections can be ordered and unordered, and the add ()method should be organized differently. Java Collection Interview Questions And Answers For 5 year Experience 30. Why is there no method in the iterator class to get the next item without moving the cursor? An iterator is similar to a pointer with its main operations: it points to a separate item in the collection of objects (provides access to the item ) and contains functions for moving to another item in the list (the next or previous one). A container that implements support for iterators should provide the first element of the list, as well as the ability to check whether all elements of the container have been enumerated (whether the iterator is finite). Thus, without a cursor, it is simply impossible to realize an error-free movement around the collection. 31. What is the difference between Iterator and ListIterator? There are three differences: - Iterator can be used to iterate over the elements Set , List and Map . In contrast, the ListIterator can only be used to iterate through the elements of the List collection. - Iterator allows you to iterate over elements in one direction only, using the next () method . While the ListIterator allows you to iterate through the list in both directions, using the methods next () and previous () - With the ListIterator, you can modify the list by adding / removing elements using the add () and remove ()methods . Iterator does not support this functionality. 32. What are the ways to iterate through all the elements of the List? There are 4 ways: - Loop with iterator - For loop - Extended for loop - While loop 33. What is the difference between fail-safe and fail-fast properties? In contrast to fail-fast, the fail-safe iterators do not raise any exceptions when the structure is changed, because they work with a collection clone instead of the original. The CopyOnWriteArrayList collection iterator and the keySet iterator of the ConcurrentHashMap collection are examples of fail-safe iterators. 34. What should I do to prevent a ConcurrentModificationException? First of all, you can choose another iterator that works on the principle of fail-safe. For example, if you use List , you can take ListIterator . If you need an outdated collection, then use enumerators. In the event that the above does not suit you, you have three options: When using JDK 1.5 or higher, the classes ConcurrentHashMap and CopyOnWriteArrayList are suitable for you . This is the best option. You can convert a list into an array and iterate through an array. You can block list changes for the duration of the search using a synchronized block. Note that the last two options will negatively affect performance. 35. What is the stack and the queue, what are the differences between them? Collections created to store items for further processing. In addition to the basic operations of the Collection interface , queues support additional operations of adding, deleting, and checking the state of an item. Usually, but not necessarily, the queues work according to the FIFO principle – the first to come, the first to leave. The stack is almost like a queue, but it works on the principle of LIFO – the last to come, the first to leave. Regardless of the order of addition / removal, the head of the queue is an element that will be removed when calling the methods remove () or poll () . Also note that Stack and Vector are both thread safe. Usage: use the queue if you want to process the stream of elements in the same order in which they arrive. Good for job list and request processing. Use a stack if you want to put and delete items only from the top of the stack, which is useful in recursive algorithms. 36. What is the difference between Comparable and Comparator interfaces? In Java, all collections that support automatic sorting use comparison methods to properly sort items. As an example of such classes, we can specify a TreeSet , TreeMap , etc. In order to sort the elements, the class must implement the Comparator or Comparable interfaces. That is why wrapper classes like Integer , Double, and String implement the Comparable interface . The Comparable interface helps preserve natural sorting, whereas the Comparator allows you to sort items by different special patterns. A comparator instance is usually passed to the collection designer, if the collection supports it. It should be noted that the Comparable interface can be implemented by the elements of the collection or the Map keys, and the Comparator is implemented by a separate object (this is convenient since you can prepare several implementations for different sorting rules without changing the code of the Map collection / key elements). 37. Why the collections do not inherit the Cloneable and Serializable interfaces? Well, the simplest answer is “because it is not necessary.” The functionality provided by Cloneable and Serializable interfaces is simply not needed for collections. (It’s worth making an exception for ArrayList and LinkedList, which implement them). Another reason is that the Cloneable subclass is not always needed because each cloning operation consumes a lot of memory, and inexperienced programmers can spend it without understanding the consequences. And the last reason – cloning and serialization are very narrowly specific operations, and they need to be implemented only when necessary. Many collection classes implement these interfaces, but there is absolutely no need to lay them out for all collections in general. If you need cloning and serialization – just use the classes where it is, if not, the other classes. Must Read Java Interview Questions Books 2020 RELATED
https://codingcompiler.com/java-collections-interview-questions/
CC-MAIN-2021-10
en
refinedweb
Access AWS Support You can access the Support Center by using the following options: Use the email address and password associated with your AWS account. (Recommended) Use AWS Identity and Access Management (IAM). If you have a Business or Enterprise Support plan, you can also use the AWS Support API to access AWS Support and Trusted Advisor operations programmatically. For more information, see the AWS Support API Reference. AWS account You can sign in to the AWS Management Console and access the Support Center by using your AWS account email address and password. This identity is called the AWS account root user. However, we strongly recommend that you don't use the root user for your everyday tasks, even the administrative ones. Instead, we recommend that you use IAM, which lets you control who can perform certain tasks in your account. IAM By default, IAM users can't access the Support Center. You can use IAM to create individual users or groups. Then, you attach IAM policies to these entities, so that they have permission to perform actions and access resources, such as to open Support Center cases and use the AWS Support API. After you create IAM users, you can give those users individual passwords and an account-specific sign-in page. They can then sign in to your AWS account and work in the Support Center. IAM users who have AWS Support access can see all cases that are created for the account. For more information, see How IAM users sign in to your AWS account in the IAM User Guide. The easiest way to grant permissions is to attach the AWS managed policy AWSSupportAccess Resource element is always set to *. You can't allow or deny access to specific support cases. Example : Allow access to all AWS Support actions The AWS managed policy AWSSupportAccess { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": ["support:*"], "Resource": "*" } ] } For more information about how to attach the AWSSupportAccess policy to your entities, see Adding IAM identity permissions (console) in the IAM User Guide. Example : Allow access to all actions except the ResolveCase action You can also create customer managed policies in IAM to specify what actions to allow or deny. The following policy statement allows an IAM user to perform all actions in AWS Support except resolve a case. { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "support:*", "Resource": "*" }, { "Effect": "Deny", "Action": "support:ResolveCase", "Resource": "*" }] } For more information about how to create a customer managed IAM policy, see Creating IAM policies (console) in the IAM User Guide. If the user or group already has a policy, you can add the AWS Support-specific policy statement to that policy. If you can't view cases in the Support Center, make sure that you have the required permissions. You might need to contact your IAM administrator. For more information, see Identity and access management for AWS Support. Access to AWS Trusted Advisor In the AWS Management Console, a separate trustedadvisor IAM namespace controls access to Trusted Advisor. In the AWS Support API, the support IAM namespace controls access to Trusted Advisor. For more information, see Manage access for AWS Trusted Advisor.
https://docs.aws.amazon.com/awssupport/latest/user/accessing-support.html
CC-MAIN-2021-10
en
refinedweb
Portfolio optimization problems involve identifying portfolios that satisfy three criteria: Minimize a proxy for risk. Match or exceed a proxy for return. Satisfy basic feasibility requirements. Portfolios are points from a feasible set of assets that constitute an asset universe. A portfolio specifies either holdings or weights in each individual asset in the asset universe. The convention is to specify portfolios in terms of weights, although the portfolio optimization tools work with holdings as well. The set of feasible portfolios is necessarily a nonempty, closed, and bounded set. The proxy for risk is a function that characterizes either the variability or losses associated with portfolio choices. The proxy for return is a function that characterizes either the gross or net benefits associated with portfolio choices. The terms “risk” and “risk proxy” and “return” and “return proxy” are interchangeable. The fundamental insight of Markowitz (see Portfolio Optimization) is that the goal of the portfolio choice problem is to seek minimum risk for a given level of return and to seek maximum return for a given level of risk. Portfolios satisfying these criteria are efficient portfolios and the graph of the risks and returns of these portfolios forms a curve called the efficient frontier. To specify a portfolio optimization problem, you need the following: Proxy for portfolio return (μ) Proxy for portfolio risk (σ) Set of feasible portfolios (X), called a portfolio set Financial Toolbox™ has three objects to solve specific types of portfolio optimization problems: The Portfolio object supports mean-variance portfolio optimization (see Markowitz [46], [47] at Portfolio Optimization). This object has either gross or net portfolio returns as the return proxy, the variance of portfolio returns as the risk proxy, and a portfolio set that is any combination of the specified constraints to form a portfolio set. The PortfolioCVaR object implements what is known as conditional value-at-risk portfolio optimization (see Rockafellar and Uryasev [48], [49] at Portfolio Optimization), which is generally referred to as CVaR portfolio optimization. CVaR portfolio optimization works with the same return proxies and portfolio sets as mean-variance portfolio optimization but uses conditional value-at-risk of portfolio returns as the risk proxy. The PortfolioMAD object implements what is known as mean-absolute deviation portfolio optimization (see Konno and Yamazaki [50] at Portfolio Optimization), which is referred to as MAD portfolio optimization. MAD portfolio optimization works with the same return proxies and portfolio sets as mean-variance portfolio optimization but uses mean-absolute deviation portfolio returns as the risk proxy. The proxy for portfolio return is a function on a portfolio set that characterizes the rewards associated with portfolio choices. Usually, the proxy for portfolio return has two general forms, gross and net portfolio returns. Both portfolio return forms separate the risk-free rate r0 so that the portfolio contains only risky assets. Regardless of the underlying distribution of asset returns, a collection of S asset returns y1,...,yS has a mean of asset returns and (sample) covariance of asset returns These moments (or alternative estimators that characterize these moments) are used directly in mean-variance portfolio optimization to form proxies for portfolio risk and return. The gross portfolio return for a portfolio is where: r0 is the risk-free rate (scalar). m is the mean of asset returns (n vector). If the portfolio weights sum to 1, the risk-free rate is irrelevant. The properties in the Portfolio object to specify gross portfolio returns are: RiskFreeRate for r0 AssetMean for m The net portfolio return for a portfolio is where: r0 is the risk-free rate (scalar). m is the mean of asset returns (n vector). b is the proportional cost to purchase assets (n vector). s is the proportional cost to sell assets (n vector). You can incorporate fixed transaction costs in this model also. Though in this case, it is necessary to incorporate prices into such costs. The properties in the Portfolio object to specify net portfolio returns are: RiskFreeRate for r0 AssetMean for m InitPort for x0 BuyCost for b SellCost for s The proxy for portfolio risk is a function on a portfolio set that characterizes the risks associated with portfolio choices. The variance of portfolio returns for a portfolio is where C is the covariance of asset returns ( n-by- n positive-semidefinite matrix). Covariance is a measure of the degree to which returns on two assets move in tandem. A positive covariance means that asset returns move together; a negative covariance means they vary inversely. The property in the Portfolio object to specify the variance of portfolio returns is AssetCovar for C. Although the risk proxy in mean-variance portfolio optimization is the variance of portfolio returns, the square root, which is the standard deviation of portfolio returns, is often reported and displayed. Moreover, this quantity is often called the “risk” of the portfolio. For details, see Markowitz (Portfolio Optimization). The conditional value-at-risk for a portfolio , which is also known as expected shortfall, is defined as where: α is the probability level such that 0 < α < 1. f(x,y) is the loss function for a portfolio x and asset return y. p(y) is the probability density function for asset return y. VaRα is the value-at-risk of portfolio x at probability level α. The value-at-risk is defined as An alternative formulation for CVaR has the form: The choice for the probability level α is typically 0.9 or 0.95. Choosing α implies that the value-at-risk VaRα(x) for portfolio x is the portfolio return such that the probability of portfolio returns falling below this level is ( 1 –α). Given VaRα(x) for a portfolio x, the conditional value-at-risk of the portfolio is the expected loss of portfolio returns above the value-at-risk return. Note Value-at-risk is a positive value for losses so that the probability level α indicates the probability that portfolio returns are below the negative of the value-at-risk. To describe the probability distribution of returns, the PortfolioCVaR object takes a finite sample of return scenarios ys, with s = 1,...,S. Each ys is an n vector that contains the returns for each of the n assets under the scenario s. This sample of S scenarios is stored as a scenario matrix of size S-by-n. Then, the risk proxy for CVaR portfolio optimization, for a given portfolio and , is computed as The value-at-risk, VaRα(x), is estimated whenever the CVaR is estimated. The loss function is , which is the portfolio loss under scenario s. Under this definition, VaR and CVaR are sample estimators for VaR and CVaR based on the given scenarios. Better scenario samples yield more reliable estimates of VaR and CVaR. For more information, see Rockafellar and Uryasev [48], [49], and Cornuejols and Tütüncü, [51], at Portfolio Optimization. The mean-absolute deviation (MAD) for a portfolio is defined as where: ys are asset returns with scenarios s = 1,...S (S collection of n vectors). f(x,y) is the loss function for a portfolio x and asset return y. m is the mean of asset returns (n vector). such that For more information, see Konno and Yamazaki [50] at Portfolio Optimization.
https://uk.mathworks.com/help/finance/portfolio-optimization-theory-cvar.html
CC-MAIN-2021-10
en
refinedweb
Fast and Easy Infinite Neural Networks in Python Project description Neural Tangents ICLR 2020 Video | Paper | Quickstart | Install guide | Reference docs | Release notes Overview Neural Tangents is a high-level neural network API for specifying complex, hierarchical, neural networks of both finite and infinite width. Neural Tangents allows researchers to define, train, and evaluate infinite networks as easily as finite ones. Infinite (in width or channel count) neural networks are Gaussian Processes (GPs) with a kernel function determined by their architecture (see References for details and nuances of this correspondence). Neural Tangents allows you to construct a neural network model with the usual building blocks like convolutions, pooling, residual connections, nonlinearities etc. and obtain not only the finite model, but also the kernel function of the respective GP. The library is written in python using JAX and leveraging XLA to run out-of-the-box on CPU, GPU, or TPU. Kernel computation is highly optimized for speed and memory efficiency, and can be automatically distributed over multiple accelerators with near-perfect scaling. Neural Tangents is a work in progress. We happily welcome contributions! Contents - Colab Notebooks - Installation - 5-Minute intro - Package description - Technical gotchas - Training dynamics of wide but finite networks - Performance - Papers - Citation - References Colab Notebooks An easy way to get started with Neural Tangents is by playing around with the following interactive notebooks in Colaboratory. They demo the major features of Neural Tangents and show how it can be used in research. - Neural Tangents Cookbook - Weight Space Linearization - Function Space Linearization - Neural Network Phase Diagram - Performance Benchmark : Simple benchmark for Myrtle kernels used in [16]. Also see Performance. Installation To use GPU, first follow JAX's GPU installation instructions. Otherwise, install JAX on CPU by running pip install jax jaxlib --upgrade Once JAX is installed install Neural Tangents by running pip install neural-tangents or, to use the bleeding-edge version from GitHub source, git clone; cd neural-tangents pip install -e . You can now run the examples (using tensorflow_datasets) and tests by calling: pip install tensorflow tensorflow-datasets more-itertools --upgrade python examples/infinite_fcn.py python examples/weight_space.py python examples/function_space.py set -e; for f in tests/*.py; do python $f; done 5-Minute intro See this Colab for a detailed tutorial. Below is a very quick introduction. Our library closely follows JAX's API for specifying neural networks, stax. In stax a network is defined by a pair of functions (init_fn, apply_fn) initializing the trainable parameters and computing the outputs of the network respectively. Below is an example of defining a 3-layer network and computing it's outputs y given inputs x. from jax import random from jax.experimental import stax init_fn, apply_fn = stax.serial( stax.Dense(512), stax.Relu, stax.Dense(512), stax.Relu, stax.Dense(1) ) key = random.PRNGKey(1) x = random.normal(key, (10, 100)) _, params = init_fn(key, input_shape=x.shape) y = apply_fn(params, x) # (10, 1) np.ndarray outputs of the neural network Neural Tangents is designed to serve as a drop-in replacement for stax, extending the (init_fn, apply_fn) tuple to a triple (init_fn, apply_fn, kernel_fn), where kernel_fn is the kernel function of the infinite network (GP) of the given architecture. Below is an example of computing the covariances of the GP between two batches of inputs x1 and x2. from jax import random from neural_tangents import stax init_fn, apply_fn, kernel_fn = stax.serial( stax.Dense(512), stax.Relu(), stax.Dense(512), stax.Relu(), stax.Dense(1) ) key1, key2 = random.split(random.PRNGKey(1)) x1 = random.normal(key1, (10, 100)) x2 = random.normal(key2, (20, 100)) kernel = kernel_fn(x1, x2, 'nngp') Note that kernel_fn can compute two covariance matrices corresponding to the Neural Network Gaussian Process (NNGP) and Neural Tangent (NT) kernels respectively. The NNGP kernel corresponds to the Bayesian infinite neural network [1-5]. The NTK corresponds to the (continuous) gradient descent trained infinite network [10]. In the above example, we compute the NNGP kernel but we could compute the NTK or both: # Get kernel of a single type nngp = kernel_fn(x1, x2, 'nngp') # (10, 20) np.ndarray ntk = kernel_fn(x1, x2, 'ntk') # (10, 20) np.ndarray # Get kernels as a namedtuple both = kernel_fn(x1, x2, ('nngp', 'ntk')) both.nngp == nngp # True both.ntk == ntk # True # Unpack the kernels namedtuple nngp, ntk = kernel_fn(x1, x2, ('nngp', 'ntk')) Additionally, if no third-argument is specified then the kernel_fn will return a Kernel namedtuple that contains additional metadata. This can be useful for composing applications of kernel_fn as follows: kernel = kernel_fn(x1, x2) kernel = kernel_fn(kernel) print(kernel.nngp) Doing inference with infinite networks trained on MSE loss reduces to classical GP inference, for which we also provide convenient tools: import neural_tangents as nt x_train, x_test = x1, x2 y_train = random.uniform(key1, shape=(10, 1)) # training targets predict_fn = nt.predict.gradient_descent_mse_ensemble(kernel_fn, x_train, y_train) y_test_nngp = predict_fn(x_test=x_test, get='nngp') # (20, 1) np.ndarray test predictions of an infinite Bayesian network y_test_ntk = predict_fn(x_test=x_test, get='ntk') # (20, 1) np.ndarray test predictions of an infinite continuous # gradient descent trained network at convergence (t = inf) # Get predictions as a namedtuple both = predict_fn(x_test=x_test, get=('nngp', 'ntk')) both.nngp == y_test_nngp # True both.ntk == y_test_ntk # True # Unpack the predictions namedtuple y_test_nngp, y_test_ntk = predict_fn(x_test=x_test, get=('nngp', 'ntk')) Infinitely WideResnet We can define a more compex, (infinitely) Wide Residual Network [14] using the same nt.stax building blocks: from neural_tangents import stax def WideResnetBlock(channels, strides=(1, 1), channel_mismatch=False): Main = stax.serial( stax.Relu(), stax.Conv(channels, (3, 3), strides, padding='SAME'), stax.Relu(), stax.Conv(channels, (3, 3), padding='SAME')) Shortcut = stax.Identity() if not channel_mismatch else stax.Conv( channels, (3, 3), strides, padding='SAME') return stax.serial(stax.FanOut(2), stax.parallel(Main, Shortcut), stax.FanInSum()) def WideResnetGroup(n, channels, strides=(1, 1)): blocks = [] blocks += [WideResnetBlock(channels, strides, channel_mismatch=True)] for _ in range(n - 1): blocks += [WideResnetBlock(channels, (1, 1))] return stax.serial(*blocks) def WideResnet(block_size, k, num_classes): return stax.serial( stax.Conv(16, (3, 3), padding='SAME'), WideResnetGroup(block_size, int(16 * k)), WideResnetGroup(block_size, int(32 * k), (2, 2)), WideResnetGroup(block_size, int(64 * k), (2, 2)), stax.AvgPool((8, 8)), stax.Flatten(), stax.Dense(num_classes, 1., 0.)) init_fn, apply_fn, kernel_fn = WideResnet(block_size=4, k=1, num_classes=10) Package description The neural_tangents ( nt) package contains the following modules and functions: stax- primitives to construct neural networks like Conv, Relu, serial, paralleletc. predict- predictions with infinite networks: predict.gradient_descent_mse- inference with a single infinite width / linearized network trained on MSE loss with continuous gradient descent for an arbitrary finite or infinite ( t=None) time. Computed in closed form. predict.gradient_descent- inference with a single infinite width / linearized network trained on arbitrary loss with continuous (momentum) gradient descent for an arbitrary finite time. Computed using an ODE solver. predict.gradient_descent_mse_ensemble- inference with an infinite ensemble of infinite width networks, either fully Bayesian ( get='nngp') or inference with MSE loss using continuous gradient descent ( get='ntk'). Finite-time Bayesian inference (e.g. t=1., get='nngp') is interpreted as gradient descent on the top layer only [11], since it converges to exact Gaussian process inference with NNGP ( t=None, get='nngp'). Computed in closed form. predict.gp_inference- exact closed form Gaussian process inference using NNGP ( get='nngp'), NTK ( get='ntk'), or both ( get=('nngp', 'ntk')). Equivalent to predict.gradient_descent_mse_ensemblewith t=None(infinite training time), but has a slightly different API (accepting precomputed kernel matrix k_train_traininstead of kernel_fnand x_train). monte_carlo_kernel_fn- compute a Monte Carlo kernel estimate of any (init_fn, apply_fn), not necessarily specified via nt.stax, enabling the kernel computation of infinite networks without closed-form expressions. Tools to investigate training dynamics of wide but finite neural networks, like linearize, taylor_expand, empirical_kernel_fnand more. See Training dynamics of wide but finite networks for details. Technical gotchas nt.stax vs jax.experimental.stax We remark the following differences between our library and the JAX one. - All nt.staxlayers are instantiated with a function call, i.e. nt.stax.Relu()vs jax.experimental.stax.Relu. - All layers with trainable parameters use the NTK parameterization by default (see [10], Remark 1). However, Dense and Conv layers also support the standard parameterization via a parameterizationkeyword argument (see [15]). nt.staxand jax.experimental.staxmay have different layers and options available (for example nt.staxlayers support CIRCULARpadding, have LayerNorm, but no BatchNorm.). CPU and TPU performance For CNNs w/ pooling, our CPU and TPU performance is suboptimal due to low core utilization (10-20%, looks like an XLA:CPU issue), and excessive padding respectively. We will look into improving performance, but recommend NVIDIA GPUs in the meantime. See Performance. Training dynamics of wide but finite networks The kernel of an infinite network kernel_fn(x1, x2).ntk combined with nt.predict.gradient_descent_mse together allow to analytically track the outputs of an infinitely wide neural network trained on MSE loss througout training. Here we discuss the implications for wide but finite neural networks and present tools to study their evolution in weight space (trainable parameters of the network) and function space (outputs of the network). Weight space Continuous gradient descent in an infinite network has been shown in [11] to correspond to training a linear (in trainable parameters) model, which makes linearized neural networks an important subject of study for understanding the behavior of parameters in wide models. For this, we provide two convenient functions: nt.linearize, and nt.taylor_expand, which allow to linearize or get an arbitrary-order Taylor expansion of any function apply_fn(params, x) around some initial parameters params_0 as apply_fn_lin = nt.linearize(apply_fn, params_0). One can use apply_fn_lin(params, x) exactly as you would any other function (including as an input to JAX optimizers). This makes it easy to compare the training trajectory of neural networks with that of its linearization. Previous theory and experiments have examined the linearization of neural networks from inputs to logits or pre-activations, rather than from inputs to Example: import jax.numpy as np import neural_tangents as nt def apply_fn(params, x): W, b = params return np.dot(x, W) + b W_0 = np.array([[1., 0.], [0., 1.]]) b_0 = np.zeros((2,)) apply_fn_lin = nt.linearize(apply_fn, (W_0, b_0)) W = np.array([[1.5, 0.2], [0.1, 0.9]]) b = b_0 + 0.2 x = np.array([[0.3, 0.2], [0.4, 0.5], [1.2, 0.2]]) logits = apply_fn_lin((W, b), x) # (3, 2) np.ndarray Function space: Outputs of a linearized model evolve identically to those of an infinite one [11] but with a different kernel - specifically, the Neural Tangent Kernel [10] evaluated on the specific apply_fn of the finite network given specific params_0 that the network is initialized with. For this we provide the nt.empirical_kernel_fn function that accepts any apply_fn and returns a kernel_fn(x1, x2, get, params) that allows to compute the empirical NTK and/or NNGP (based on get) kernels on specific params. Example: import jax.random as random import jax.numpy as np import neural_tangents as nt def apply_fn(params, x): W, b = params return np.dot(x, W) + b W_0 = np.array([[1., 0.], [0., 1.]]) b_0 = np.zeros((2,)) params = (W_0, b_0) key1, key2 = random.split(random.PRNGKey(1), 2) x_train = random.normal(key1, (3, 2)) x_test = random.normal(key2, (4, 2)) y_train = random.uniform(key1, shape=(3, 2)) kernel_fn = nt.empirical_kernel_fn(apply_fn) ntk_train_train = kernel_fn(x_train, None, 'ntk', params) ntk_test_train = kernel_fn(x_test, x_train, 'ntk', params) mse_predictor = nt.predict.gradient_descent_mse(ntk_train_train, y_train) t = 5. y_train_0 = apply_fn(params, x_train) y_test_0 = apply_fn(params, x_test) y_train_t, y_test_t = mse_predictor(t, y_train_0, y_test_0, ntk_test_train) # (3, 2) and (4, 2) np.ndarray train and test outputs after `t` units of time # training with continuous gradient descent What to Expect The success or failure of the linear approximation is highly architecture dependent. However, some rules of thumb that we've observed are: Convergence as the network size increases. For fully-connected networks one generally observes very strong agreement by the time the layer-width is 512 (RMSE of about 0.05 at the end of training). For convolutional networks one generally observes reasonable agreement agreement by the time the number of channels is 512. Convergence at small learning rates. With a new model it is therefore advisable to start with a very large model on a small dataset using a small learning rate. Performance In the table below we measure time to compute a single NTK entry in a 21-layer CNN ( 3x3 filters, no strides, SAME padding, ReLU) on inputs of shape 3x32x32. Precisely: layers = [] for _ in range(21): layers += [stax.Conv(1, (3, 3), (1, 1), 'SAME'), stax.Relu()] CNN with pooling Top layer is stax.GlobalAvgPool(): _, _, kernel_fn = stax.serial(*(layers + [stax.GlobalAvgPool()])) CNN without pooling Top layer is stax.Flatten(): _, _, kernel_fn = stax.serial(*(layers + [stax.Flatten()])) Tested using version 0.2.1. All GPU results are per single accelerator. Note that runtime is proportional to the depth of your network. If your performance differs significantly, please file a bug! Myrtle network Colab notebook Performance Benchmark demonstrates how one would construct and benchmark kernels. To demonstrate flexibility, we took architecture from [16] as an example. With NVIDIA V100 64-bit precision, nt took 316/330/508 GPU-hours on full 60k CIFAR-10 dataset for Myrtle-5/7/10 kernels. Papers Neural Tangents has been used in the following papers: - Correlated Weights in Infinite Limits of Deep Convolutional Neural Networks - Dataset Meta-Learning from Kernel Ridge-Regression - Deep learning versus kernel learning: an empirical study of loss landscape geometry and the time evolution of the Neural Tangent Kernel - Stable ResNet - Label-Aware Neural Tangent Kernel: Toward Better Generalization and Local Elasticity - Semi-supervised Batch Active Learning via Bilevel Optimization - Temperature check: theory and practice for training models with softmax-cross-entropy losses - Experimental Design for Overparameterized Learning with Application to Single Shot Deep Active Learning - How Neural Networks Extrapolate: From Feedforward to Graph Neural Networks - Exploring the Uncertainty Properties of Neural Networks’ Implicit Priors in the Infinite-Width Limit - Cold Posteriors and Aleatoric Uncertainty - Asymptotics of Wide Convolutional Neural Networks - Finite Versus Infinite Neural Networks: an Empirical Study - Bayesian Deep Ensembles via the Neural Tangent Kernel - The Surprising Simplicity of the Early-Time Learning Dynamics of Neural Networks - When Do Neural Networks Outperform Kernel Methods? - Statistical Mechanics of Generalization in Kernel Regression - Exact posterior distributions of wide Bayesian neural networks - Infinite attention: NNGP and NTK for deep attention networks - Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains - Finding trainable sparse networks through Neural Tangent Transfer - Coresets via Bilevel Optimization for Continual Learning and Streaming - On the Neural Tangent Kernel of Deep Networks with Orthogonal Initialization - The large learning rate phase of deep learning: the catapult mechanism - Spectrum Dependent Learning Curves in Kernel Regression and Wide Neural Networks - Taylorized Training: Towards Better Approximation of Neural Network Training at Finite Width - On the Infinite Width Limit of Neural Networks with a Standard Parameterization - Disentangling Trainability and Generalization in Deep Learning - Information in Infinite Ensembles of Infinitely-Wide Neural Networks - Training Dynamics of Deep Networks using Stochastic Gradient Descent via Neural Tangent Kernel - Wide Neural Networks of Any Depth Evolve as Linear Models Under Gradient Descent - Bayesian Deep Convolutional Networks with Many Channels are Gaussian Processes Please let us know if you make use of the code in a publication and we'll add it to the list! Citation If you use the code in a publication, please cite our ICLR 2020 paper: @inproceedings{neuraltangents2020, title={Neural Tangents: Fast and Easy Infinite Neural Networks in Python}, author={Roman Novak and Lechao Xiao and Jiri Hron and Jaehoon Lee and Alexander A. Alemi and Jascha Sohl-Dickstein and Samuel S. Schoenholz}, booktitle={International Conference on Learning Representations}, year={2020}, url={} } References [1] Priors for Infinite Networks [2] Exponential expressivity in deep neural networks through transient chaos [3] Toward deeper understanding of neural networks: The power of initialization and a dual view on expressivity [4] Deep Information Propagation [5] Deep Neural Networks as Gaussian Processes [6] Gaussian Process Behaviour in Wide Deep Neural Networks [7] Dynamical Isometry and a Mean Field Theory of CNNs: How to Train 10,000-Layer Vanilla Convolutional Neural Networks. [8] Bayesian Deep Convolutional Networks with Many Channels are Gaussian Processes [9] Deep Convolutional Networks as shallow Gaussian Processes [10] Neural Tangent Kernel: Convergence and Generalization in Neural Networks [11] Wide Neural Networks of Any Depth Evolve as Linear Models Under Gradient Descent [12] Scaling Limits of Wide Neural Networks with Weight Sharing: Gaussian Process Behavior, Gradient Independence, and Neural Tangent Kernel Derivation [13] Mean Field Residual Networks: On the Edge of Chaos [14] Wide Residual Networks [15] On the Infinite Width Limit of Neural Networks with a Standard Parameterization [16] Neural Kernels Without Tangents Project details Release history Release notifications | RSS feed Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/neural-tangents/
CC-MAIN-2021-10
en
refinedweb
Created 08-22-2017 02:47 PM Hi, I use to run this API command (Atlas 0.6) to fetch the schema details from the Atlas. My whole purpose was to identify the tagged and not tagged columns from the huge JSON output. In Atlas 0.6, For tags, One new key value pair use to get added for tagged columns, Key Name was '$traits$'.In Atlas 0.7, I am not getting any classifier for tagged and not tagged columns. Basically the Rest End point was mentioned below. atlas_hostname + '/api/atlas/discovery/search/dsl?query=hive_table+where+db.name%3D%22'+ database + '%22&limit=10000' Whole Code is here : atlas_hostname = 'hostname:21000' database = 'mydb' atlas_user_name = 'myuser' atlas_password = 'mypassword' headers = {'Accept': 'application/json, text/plain, */*','Content-Type': 'application/json; charset=UTF-8'} def atlas_get_request(atlas_hostname,database, atlas_user_name,atlas_password): try: list_of_table_json=requests.get(atlas_hostname + '/api/atlas/discovery/search/dsl?query=hive_table+where+db.name%3D%22'+ database + '%22&limit=10000' , headers=headers, auth=(atlas_user_name,atlas_password)) dict_of_tables= list_of_table_json.json() result=dict_of_tables['results'] return result except requests.exceptions.HTTPError as err: print (err) return ("invalid Credentials or UI is unresponsive") I use to invoke the function atlas get request to fetch the schema details result = atlas_get_request(atlas_hostname,database, atlas_user_name,atlas_password) From the huge JSON response saved in "result", i used to filter the column key value pairs, Tagged column used to have an extra Key Value pair. Extra Key was "$traits$" in Atlas 0.6. Now in the column array of each and every column (in Atlas 0.7) i am getting only 9 key value pairs. No key value pairs which has the Tag details. Keys of the JSON dict mentioned below (tagged and untagged Column). [u'comment', u'$id$', u'qualifiedName', u'description', u'$typeName$', u'owner', u'table', u'type', u'name'] I am using apache atlas 0.7. Let me know how to overcome this issue. Thank you in advance, Subash Created 08-29-2017 03:03 PM Hi Subash, I've tried to perform a search just like yours and you're right: it's not showing $traits$ key. I tried something that worked for me: in your Rest End Point change the %22 to %27, just like this: atlas_hostname + '/api/atlas/discovery/search/dsl?query=hive_table+where+db.name%3D%27'+ database + '%27&limit=10000' I think it'll work. Best Regards, Ivan.
https://community.cloudera.com/t5/Support-Questions/I-am-not-getting-traits-key-in-Rest-API-Call-Apache-Atlas-0/m-p/232289
CC-MAIN-2021-10
en
refinedweb
One of the more interesting topics that arise in Data-binding is that of Data Conversion. So much can be accomplished without explicit conversion that it is easy to overlook… until you hit the real world. To see this, I’ll make a small modification to the code I wrote for the Silverlight Tutorial on Data Binding. The premise of this simplified example is that you are interacting with a bookstore or a library, displaying information about one book at a time. We assume that you get the book information from wherever it is held via a web service, but that you will create Business Objects from that data, and it is these business objects that you will bind to (this is not the only way to do it, but it is the most flexible). Briefly, the architecture in the original example looked like this What we’re trying to accomplish here: bind the data from a Book object to a Silverlight control (textBlock, etc.) The Book class is the business object. It is presumably filled from a call to a web-service, and the properties of the book are bound to the UI objects. As discussed yesterday this is accomplished by setting the binding in the Xaml, <TextBlock x:Name=”Title” Text=”{ Binding Title, Mode=OneWay }” /> The value that is bound, Title, is a property of the Book class 1: public class Book : INotifyPropertyChanged 2: { 3: private string bookTitle; 4: //... 5: 6: public string Title 7: { 8: get { return bookTitle; } 9: set 10: { 11: bookTitle = value; 12: NotifyPropertyChanged( "Title" ); 13: } // end set 14: } // end property 15: //... 16: In this case the binding target is a TextBlock also named Title. The Mode on the Binding is set to OneWay indicating that the Title will come from the BindingSource (the Book object) to be bound to the TextBlock, but changes made in the UI will not be sent back to the Binding Source. <TextBlock x:Name=”Title” Text=”{ Binding Title, Mode=OneWay }” /> Converting Data – Dates for a Calendar If the BookObject had a property for publication date that was of type DateTime, public DateTime PublicationDate { get {//…} set {//…} } You could easily bind it to a Calendar object, <basics:Calendar x:Name=”pubDateCal” IsTodayHighlighted=”False” SelectedDate=”{ Binding PublicationDate }” DisplayDate= “{ Binding PublicationDate }” /> … A calendar control expects a DateTime for its Selected and Display date Dependency Properties. But what do you do if the service you are using does not provide that for you. What if, instead, it provides only a month and a year and yet, you’d still like to show the calendar? You have a number of options. Certainly, you could have the Book convert the month and year into a DateTime before the binding, creating a PublicationDate property that returns a DateTime ready to be bound. An alternative is to create a converter that takes a month and a year and converts it to the type needed by your control when displaying it, and converts it back when storing data to the business object. Okay, to make the contrivance just a bit worse, Converters are happier taking a single type and converting to another type, so we’re going to have the Book class maintain its publication Month and Year as integers, but I’ve taken the liberty™ of having the PublicationDate property return a new class MonthAndYear that we can then convert to DateTime. This is either due to spec requirements beyond the scope of this article, or to my wanting a simple example that has now gotten completely out of hand. In any case, here is the class we’ll use: 1: public class MonthAndYear 2: { 3: public int Month { get; set; } 4: public int Year { get; set; } 5: public MonthAndYear( int m, int y ) 6: { 7: Month = m; 8: Year = y; 9: } 10: } Value Converters have real value You can imagine many plausible situations where you might decide that your business object has data in a form that does need conversion and yet the business class ought not be doing that conversion. It is not unreasonable to delegate that responsibility to the UI level if it is only the UI level that needs the conversion. Silverlight provides an Interface IValueConverter, which is designed for this situation. The interface requires that you implement the Convert and the ConvertBack methods, and our architecture becomes more like this, The Book class is modified to add the members but this time they do not each have their own property, but rather there is a single property, PublicationDate that manages them via the creation or reception of a MonthAndYear object, 1: public class Book : INotifyPropertyChanged 2: { 3: private string bookTitle; 4: private int pubMonth; 5: private int pubYear; 6: // other members 7: 8: public MonthAndYear PublicationDate 9: { 10: get { return new MonthAndYear( pubMonth, pubYear ); } 11: set 12: { 13: pubMonth = value.Month; 14: pubYear = value.Year; 15: NotifyPropertyChanged( "PublicationDate" ); 16: } 17: } 18: This greatly simplifies matters as we can not convert from one type (MonthAndYear) to another DateTime (and back, if we like). Here is the complete DateToMonthAndYear.cs 1: using System; 2: using System.Windows.Data; 3: 4: namespace DataBinding 5: { 6: public class DateToMonthYear : IValueConverter 7: { 8: 9: public object Convert( 10: object value, // source data 11: Type targetType, // type of dataa expected by target d.p. 12: object parameter, // optional param to help conversion 13: System.Globalization.CultureInfo culture ) 14: { 15: MonthAndYear temp = value as MonthAndYear; 16: if ( temp != null ) 17: { 18: return new DateTime( temp.Year, temp.Month, 1 ); 19: } 20: else 21: { 22: throw new ArgumentException( "DateToMonthYear conversion failed," + 23: "trying to convert " + value.ToString() ); 24: } 25: 26: } 27: 28: 29: public object ConvertBack( 30: object value, 31: Type targetType, 32: object parameter, 33: System.Globalization.CultureInfo culture ) 34: { 35: MonthAndYear retVal = null; 36: try 37: { 38: DateTime dt = (DateTime) value; 39: retVal = new MonthAndYear( dt.Month, dt.Year ); 40: } 41: catch ( Exception e ) 42: { 43: throw new ArgumentException( "Exception thrown in DateToMonthYear.ConvertBack. Value = " + 44: value.ToString(), e ); 45: } 46: return retVal; 47: } 48: } 49: } What we’re trying to accomplish: The book class now has a property, PublicationDate that returns a MonthAndYear, we want to bind that to a Calendar that expects a DateTime object. The Converter will make the conversion back and forth. We modify Page.xaml like this, 1: <UserControl x:Class="DataBinding.Page" xmlns="" xmlns:x="" 2: xmlns:local="clr-namespace:DataBinding" Width="433" Height="416" xmlns:d="" 3: xmlns:mc="" mc:Ignorable="d" 4: xmlns: 5: <UserControl.Resources> 6: <local:DateToMonthYear x: 7: </UserControl.Resources> 8: <Grid x: 9: <!-- Snip! --> 10: <basics:Calendar x:Name="pubDateCal" 11: FontFamily="Georgia" 12: Grid.Column="1" Grid.Row="5" Grid.RowSpan="3" 13: IsTodayHighlighted="False" 14: VerticalAlignment="Top" HorizontalAlignment="Left" 15: SelectedDate="{ Binding PublicationDate, Mode=OneWay, 16: Converter={StaticResource Converter1} }" 17: DisplayDate= "{ Binding PublicationDate, Mode=OneWay, 18: Converter={StaticResource Converter1} }" /> 19: 20: </Grid> 21: </UserControl> We create two new namespaces. On line 4 basics is created to bring in the System.Windows.Controls namespace and is used on line 10 to create the Calendar. On line 2 local is created to create a namespace for the application itself. On lines 5 through 7 a Resources section is created, inside of which a resource of type DateToMonthYear is declared using the local namespace; this ties the resource to the class we created, and assigns it a static resource key of converter1. The SelectionDate and the DisplayDate of the Calendar control are bound to the PublicationDate and given the Converter to use, identified using the resource by that key (converter1) on lines 16 and 18. Putting It Together When the Binding engine tries to bind PublicationDate to the SelectedDate or DisplayDay Dependency Properties of the Calendar, it will see that the converter is referencing the staticResource whose key is Converter1. It will look at the resource section and see that the key resolves to a namespace that identifies which assembly to look in, and an identifier that tells the Binding engine which class to use. That class must implement IValueConverter, and it will call Convert, passing in the MonthAdnYear object and getting back a DateTime that it will then use to bind to the Dependency Properties. Piece of cake. The example is a bit contrived, but that allows for some simplicity in tracing it through. The source code is here. Thanks. -j
https://jesseliberty.com/2008/10/12/data-binding-%E2%80%93-data-conversion/
CC-MAIN-2021-10
en
refinedweb
Some people who use Android devices have different accessibility needs than others. To assist a given group of the population who shares an accessibility need, the Android framework provides the ability for developers to create an accessibility service, which presents apps' content to them and operates apps on their behalf. Android provides several system accessibility services, including the following: - TalkBack: Helps people who have low vision or are blind. Announces content through a synthesized voice, and performs actions on an app in response to user gestures. - Switch Access: Helps people who have motor disabilities. Highlights interactive elements, and performs actions in response to the user pressing a button. Allows for controlling the device using only 1 or 2 buttons. In order to help people with accessibility needs use your app successfully, your app should follow the best practices described on this page, which build upon the key guidelines described in Make apps more accessible. Each of the following best practices can further improve your app's accessibility: - Label elements - Users should be able to understand the content and purpose of each interactive and meaningful UI element within your app. - Use or extend system widgets - Build off of the view elements that the framework includes, rather than creating your own custom views. The framework's view and widget classes already provide most of the accessibility capabilities that your app needs. - Use cues other than color - Users should be able to clearly distinguish between categories of elements in a UI. To do so, use patterns and position, along with color, to express these differences. - Make media content more accessible - Try to add descriptions to your app's video or audio content so that users who consume this content don't need to rely on entirely visual or aural cues. Label elements It's important to provide users with useful and descriptive labels for each interactive UI element in your app. Each label should explain the meaning and purpose of a particular element. Screen readers such as TalkBack can announce these labels to users who rely on these services. In most cases, you specify a given UI element's description in the layout resource file that contains this element. Although you usually add labels using the contentDescription attribute, as explained in the guide to making apps more accessible, there are several other labeling techniques to keep in mind, as described in the following sections. Editable elements When labeling editable elements, such as EditText objects, it's helpful to show text that gives examples of valid input in the element itself, in addition to making this example text available to screen readers. In these situations, you can use the android:hint attribute, as shown in the following snippet: <!-- The hint text for en-US locale would be "Apartment, suite, or building". --> <EditText android:id="@+id/addressLine2" android:hint="@string/aptSuiteBuilding" ... /> In this situation, the View object should have its android:labelFor attribute set to the ID of the EditText element. For more details, see the section describing how to label pairs of elements where one describes the other. Pairs of elements where one describes the other It's common for a given EditText element to have a corresponding View object that describes the content that users should enter within the EditText element. On devices that run Android 4.2 (API level 17) or higher, you can indicate this relationship by setting the View object's android:labelFor attribute. An example of labeling such element pairs appears in the following snippet: <!-- Label text for en-US locale would be "Username:" --> <TextView android: <EditText android: <EditText android:id="@+id/passwordEntry" android:inputType="textPassword" ... /> Elements in a collection When adding labels to the elements of a collection, each label should be unique. That way, the system's accessibility services can refer to exactly one on-screen element when announcing a label. This correspondence lets users know when they've cycled through the UI or when they've moved focus to an element that they've already discovered. In particular, you should include additional text or contextual information in elements within reused layouts, such as RecyclerView objects, so that each child element is uniquely identified. To do so, set the content description as part of your adapter implementation, as shown in the following code snippet: Kotlin data class MovieRating(val title: String, val starRating: Integer) class MyMovieRatingsAdapter(private val myData: Array<MovieRating>): RecyclerView.Adapter<MyMovieRatingsAdapter.MyRatingViewHolder>() { class MyRatingViewHolder(val ratingView: ImageView) : RecyclerView.ViewHolder(ratingView) override fun onBindViewHolder(holder: MyRatingViewHolder, position: Int) { val ratingData = myData[position] holder.ratingView.contentDescription = "Movie ${position}: " + "${ratingData.title}, ${ratingData.starRating} stars" } } Java public class MovieRating { private String title; private int starRating; // ... public String getTitle() { return title; } public int getStarRating() { return starRating; } } public class MyMovieRatingsAdapter extends RecyclerView.Adapter<MyAdapter.MyRatingViewHolder> { private MovieRating[] myData; public static class MyRatingViewHolder extends RecyclerView.ViewHolder { public ImageView ratingView; public MyRatingViewHolder(ImageView iv) { super(iv); ratingView = iv; } } @Override public void onBindViewHolder(MyRatingViewHolder holder, int position) { MovieRating ratingData = myData[position]; holder.ratingView.setContentDescription("Movie " + position + ": " + ratingData.getTitle() + ", " + ratingData.getStarRating() + " stars") } } Groups of related content If your app displays several UI elements that form a natural group, such as details of a song or attributes of a message, arrange these elements within a container, which is usually a subclass of ViewGroup. Set the container object's android:screenReaderFocusable attribute to true, and each inner object's android:focusable attribute to false. In doing so, accessibility services can present the inner elements' content descriptions, one after the other, in a single announcement. This consolidation of related elements helps users of assistive technology discover the information that's on the screen more efficiently. The following snippet contains pieces of content that relate to one another, so the container element, an instance of ConstraintLayout, has its android:screenReaderFocusable attribute set to true, and the inner TextView elements each has its android:focusable attribute set to false: <!-- In response to a single user interaction, accessibility services announce both the title and the artist of the song. --> <ConstraintLayout android: <TextView android: <TextView android: </ConstraintLayout> Because accessibility services announce the inner elements' descriptions in a single utterance, it's important to keep each description as short as possible while still conveying the element's meaning. Custom group label If desired, you can override the platform's default grouping and ordering of a group's inner element descriptions by providing a content description for the group itself. The following snippet shows an example of customized group description: <!-- In response to a single user interaction, accessibility services announce the custom content description for the group. --> <ConstraintLayout android: <TextView android: <TextView android: </ConstraintLayout> Nested groups If your app's interface presents multi-dimensional information, such as a day-by-day list of festival events, use the android:screenReaderFocusable attribute on the inner group containers. This labeling scheme provides a good balance between number of announcements needed to discover the screen's content and the length of each announcement. The following code snippet shows one method of labeling groups inside of larger groups: <!-- In response to a single user interaction, accessibility services announce the events for a single stage only. --> <ConstraintLayout android: <!-- UI elements that describe the events on Stage A. --> </ConstraintLayout> <ConstraintLayout android: <!-- UI elements that describe the events on Stage B. --> </ConstraintLayout> </ConstraintLayout> Headings within text Some apps use headings to summarize groups of text that appear on screen. If a particular View element represents a heading, you can indicate its purpose for accessibility services by setting the element's android:accessibilityHeading attribute to true. Users of accessibility services can choose to navigate between headings instead of between paragraphs or between words. This flexibility improves the text navigation experience. Accessibility pane titles In Android 9 (API level 28) and higher, you can provide accessibility-friendly titles for a screen's panes. For accessibility purposes, a pane is a visually distinct portion of a window, such as the contents of a fragment. In order for accessibility services to understand a pane's window-like behavior, you should give descriptive titles to your app's panes. Accessibility services can then provide more granular information to users when a pane's appearance or content changes. To specify the title of a pane, use the android:accessibilityPaneTitle attribute, as shown in the following snippet: <!-- Accessibility services receive announcements about content changes that are scoped to either the "shopping cart view" section (top) or "browse items" section (bottom) --> <MyShoppingCartView android:id="@+id/shoppingCartContainer" android:accessibilityPaneTitle="@string/shoppingCart" ... /> <MyShoppingBrowseView android:id="@+id/browseItemsContainer" android:accessibilityPaneTitle="@string/browseProducts" ... /> Decorative elements If an element in your UI exists only for visual spacing or visual appearance purposes, set its android:contentDescription attribute to "null". If your app supports only the devices that run Android 4.1 (API level 16) or higher, you can instead set these purely decorative elements' android:importantForAccessibility attributes to "no". Extend system widgets Key point: When you design your app's UI, use or extend system-provided widgets that are as far down Android's class hierarchy as possible. System-provided widgets that are far down the hierarchy already have most of the accessibility capabilities that your app needs. It's easier to extend these system-provided widgets than to create your own from the more generic View, ViewCompat, Canvas, and CanvasCompat classes. If you must extend View or Canvas directly, which might be necessary for a highly customized experience or a game level, see Make custom views more accessible. This section describes how to implement a special type of Switch called TriSwitch. A TriSwitch object works similarly to a Switch object, except that each instance of TriSwitch allows the user to toggle among 3 possible states. Extend from far down the class hierarchy The Switch object inherits from several framework UI classes in its hierarchy: View ↳ TextView ↳ Button ↳ CompoundButton ↳ Switch It's best for the new TriSwitch class to extend directly from the Switch class. That way, the Android accessibility framework provides most of the accessibility capabilities that the TriSwitch class needs: - Accessibility actions: Inform the system how accessibility services emulate each possible user input that's performed on a TriSwitchobject. (Inherited from View.) - Accessibility events: Inform accessibility services about every possible way that a TriSwitchobject's appearance can change when the screen refreshes or updates. (Inherited from View.) - Characteristics: Details about each TriSwitchobject, such as the contents of any text that it displays. (Inherited from TextView.) - State information: A description of a TriSwitchobject's current state, such as "checked" or "unchecked". (Inherited from CompoundButton.) - Text description of state: A text-based explanation of what each state represents. (Inherited from Switch.) This aggregate behavior, from Switch and its superclasses, is nearly the desired behavior for TriSwitch objects. Therefore, your implementation can focus on expanding the number of possible states from 2 to 3. Define custom events When you extend a system widget, you likely change an aspect of how users interact with that widget. It's important to define these interaction changes so that accessibility services can update your app's widget as if the user interacted with the widget directly. A general guideline is that, for every view-based callback that you override, you also need to redefine the corresponding accessibility action by overriding ViewCompat.replaceAccessibilityAction(). In your app's tests, you can validate the behavior of these redefined actions by calling ViewCompat.performAccessibilityAction(). How this principle would work for TriSwitch objects Unlike an ordinary Switch object, tapping a TriSwitch object cycles through 3 possible states. Therefore, the corresponding ACTION_CLICK accessibility action needs to be updated: Kotlin class TriSwitch(context: Context) : Switch(context) { // 0, 1, or 2. var currentState: Int = 0 private set init { updateAccessibilityActions() } private fun updateAccessibilityActions() { ViewCompat.replaceAccessibilityAction(this, ACTION_CLICK, action-label) { view, args -> moveToNextState() }) } private fun moveToNextState() { currentState = (currentState + 1) % 3 } } Java public class TriSwitch extends Switch { // 0, 1, or 2. private int currentState; public int getCurrentState() { return currentState; } public TriSwitch() { updateAccessibilityActions(); } private void updateAccessibilityActions() { ViewCompat.replaceAccessibilityAction(this, ACTION_CLICK, action-label, (view, args) -> moveToNextState()); } private void moveToNextState() { currentState = (currentState + 1) % 3; } } 1 shows two versions of an activity. One version uses only color to distinguish between two possible actions in a workflow. The other version uses the best practice of including shapes and text in addition to color to highlight the differences between the two options: Make media content more accessible If you're developing an app that includes media content, such as a video clip or an audio recording, try to support users with different types of accessibility needs in understanding this material. In particular, we encourage you to do the following: - Include controls that allow users to pause or stop the media, change the volume, and toggle subtitles (captions). - If a video presents information that is vital to completing a workflow, provide the same content in an alternative format, such as a transcript. Additional resources To learn more about making your app more accessible, see the following additional resources:
https://developer.android.com/guide/topics/ui/accessibility/principles?hl=he
CC-MAIN-2021-10
en
refinedweb
In keeping with programming tradition, Listing D.1 outlines our first program: "Hello, world". using System; class Hello { static void Main() { // write "Hello, world!" to the screen Console.WriteLine("Hello, world!"); } } Developers with experience in ASP/VBScript should note that the C# language is case sensitive and requires a semicolon at the end of statements like JavaScript. By following convention, we save the source code for our program with a .cs file extension, hello.cs . We can compile our program using the .NET command-line compiler, csc.exe , using the following syntax: csc hello.cs The output of this directive produces our executable program, hello.exe . The output of the program is as follows : Hello, world! From this basic example, we can glean several important points: The entry point of a C# program is a static method named Main . As mentioned before, C# does not include a class library. We are able to write to the console using the System.Console class found in the .NET platform framework classes. C, C++, and Java developers may notice that Main is not a global method. This is because C# does not support global methods and variables . These elements must be enclosed within a type declaration such as classes and structs. The Using statement symbolically handles library dependencies instead of importing source with the #include statement found in C and C++.
https://flylib.com/books/en/3.146.1.147/1/
CC-MAIN-2021-10
en
refinedweb
Important: Please read the Qt Code of Conduct - Help to set size of widget I'm trying to set a size to the widget (PasswordEdit) with setGeometry, but not works. I'm want to the PasswordEdit stay on the center... Someone can help about this? from PySide2.QtCore import (QCoreApplication, QDate, QDateTime, QMetaObject, QObject, QPoint, QRect, QSize, QTime, QUrl, Qt) from PySide2.QtGui import (QBrush, QColor, QConicalGradient, QCursor, QFont, QFontDatabase, QIcon, QKeySequence, QLinearGradient, QPalette, QPainter, QPixmap, QRadialGradient) from PySide2.QtWidgets import * import sys from qtwidgets import PasswordEdit class Test(QMainWindow): def __init__(self, parent=None): super().__init__(parent) self.setMinimumSize(500, 500) password = PasswordEdit() password.setGeometry(QRect(100,100,1,100)) self.setCentralWidget(password) if __name__ == '__main__': qt = QApplication(sys.argv) app = Test() app.show() qt.exec_() - mrjj Lifetime Qt Champion last edited by Hi You can use move(x,y) (and resize) instead of setGeometry I assume that the password is not affect by any layouts as then moving it is futile :)
https://forum.qt.io/topic/115126/help-to-set-size-of-widget
CC-MAIN-2021-10
en
refinedweb
4981/splitting-a-python-string words = text.split() The split() function transforms the string into a list of strings containing the words that make up the sentence. The split() method in Python returns a list of the words in the string/line , separated by the delimiter string. This method will return one or more new strings. All substrings are returned in the list datatype. Syntax : string.split(separator, max) separator : The is a delimiter. The string splits at this specified separator. If is not provided then any white space is a separator. max : It is a number, which tells us to split the string into maximum of provided number of times. If it is not provided then there is no limit. Return: The split() breaks the string at the separator and returns a list of strings. You can use word.find('o') as well to ...READ MORE class Solution: def firstAlphabet(self, s): self.s=s k='' k=k+s[0] for i in range(len(s)): if ...READ MORE If you are talking about the length ...READ MORE How about: >>> 'hello world'[::-1] 'dlrow olleh' This is ...READ MORE suppose you have a string with a ...READ MORE You can also use the random library's ...READ MORE Syntax : list. count(value) Code: colors = ['red', 'green', ...READ MORE can you give an example using a ...READ MORE Here's a generator that yields the chunks ...READ MORE for c in "string": ...READ MORE OR Already have an account? Sign in.
https://www.edureka.co/community/4981/splitting-a-python-string
CC-MAIN-2021-10
en
refinedweb
java.util.*; import java.util.concurrent.CompletionStage; import java.util.stream.Collectors; import javax.inject.Inject; import com.fasterxml.jackson.databind.JsonNode; import play.libs.ws.WSClient; class GitHubClient { private WSClient ws; @Inject public GitHubClient(WSClient ws) { this.ws = ws; } String baseUrl = ""; public CompletionStage<List<String>> getRepositories() { return ws.url(baseUrl + "/repositories") .get() .thenApply( response -> response.asJson().findValues("full_name").stream() .map(JsonNode::asText) .collect(Collectors.toList())); } } Note that it takes the GitHub API base URL as a parameter - we’ll override this in our tests so that we can point it to our mock server. To test this, we want an embedded Play server that will implement this endpoint. We can do that by Creating an embedded server with the Routing DSL: Server server = Server.forRouter( (components) -> RoutingDsl.fromComponents(components) .GET("/repositories") .routingTo( request -> { ArrayNode repos = Json.newArray(); ObjectNode repo = Json.newObject(); repo.put("full_name", "octocat/Hello-World"); repos.add(repo); return ok(repos); }) .build()); Our server is now running on a random port, that we can access through the httpPort method. We could build the base URL to pass to the GitHubClient using this, however Play has an even simpler mechanism. The WSTestClient class provides a newClient method that takes in a port number. When requests are made using the client to relative URLs, eg to /repositories, this client will send that request to localhost on the passed in port. This means we can set a base URL on the GitHubClient to "". It also means if the client returns resources with URL links to other resources that the client then uses to make further requests, we can just ensure those a relative URLs and use them as is. So now we can create a server, WS client and GitHubClient in a @Before annotated method, and shut them down in an @After annotated method, and then we can test the client in our tests: import java.io.IOException; import java.util.*; import java.util.concurrent.TimeUnit; import com.fasterxml.jackson.databind.node.*; import org.junit.*; import play.libs.Json; import play.libs.ws.*; import play.routing.RoutingDsl; import play.server.Server; import static play.mvc.Results.*; import static org.junit.Assert.*; import static org.hamcrest.core.IsCollectionContaining.*; public class GitHubClientTest { private GitHubClient client; private WSClient ws; private Server server; @Before public void setup() { server = Server.forRouter( (components) -> RoutingDsl.fromComponents(components) .GET("/repositories") .routingTo( request -> { ArrayNode repos = Json.newArray(); ObjectNode repo = Json.newObject(); repo.put("full_name", "octocat/Hello-World"); repos.add(repo); return ok(repos); }) .build()); ws = play.test.WSTestClient.newClient(server.httpPort()); client = new GitHubClient(ws); client.baseUrl = ""; } @After public void tearDown() throws IOException { try { ws.close(); } finally { server.stop(); } } @Test public void repositories() throws Exception { List<String> repos = client.getRepositories().toCompletableFuture().get(10, TimeUnit.SECONDS); assertThat(repos, hasItem(: Server server = Server.forRouter( (components) -> RoutingDsl.fromComponents(components) .GET("/repositories") .routingTo(request -> ok().sendResource("github/repositories.json")) .build()); Note that Play will automatically set a content type of application/json due to the filename’s extension of .json. Next: Logging
https://www.playframework.com/documentation/2.8.5/JavaTestingWebServiceClients
CC-MAIN-2021-10
en
refinedweb
Implementscan);); // text import std.algorithm.comparison : equal; string text = "a,b,c\nHello,65,63.63\nWorld,123,3673.562"; auto records = text.csvReader!int(["b"]); assert(records.equal!equal([ [65], [123], ]));") ]));." © 1999–2019 The D Language Foundation Licensed under the Boost License 1.0.
https://docs.w3cub.com/d/std_csv
CC-MAIN-2021-10
en
refinedweb
python-pipedrivepython-pipedrive Python library for interacting with the pipedrive.com API This is being developed for my specific use so there's no guarantee I'll cover all of the aspects of the Pipedrive API. Feel free to add features though, I welcome pull requests. All features should be supported though as this is just a lightweight wrapper around the API. Usage: Create a Pipedrive object, passing either the api key or your username and password as the parameters from pipedrive import Pipedrive pipedrive = Pipedrive(USERNAME, PASSSWORD) or from pipedrive import Pipedrive pipedrive = Pipedrive(API_KEY) The rest of the functions relate to the URL as specified in the API Docs. Do yourself a favor and try a few simple requests and look the raw responses to know what data Pipedrive's API gives you. This will aid in knowing how to deal with your responses in python code. For example, to find an organzation: curl '' which spits out something like: {"success":true,"data":[{"id":215,"name":"Microsoft Main Organization","visible_to":"3"},{"id":360,"name":"Microsoft Subdivision Company","visible_to":"3"}],"additional_data":{"pagination":{"start":0,"limit":100,"more_items_in_collection":false}}} The two things to note are the HTTP Method, and the path: Examples: List the organizations pipedrive.organizations(method='GET') Add a New Deal pipedrive.deals({ 'title': 'Big Sucker', 'value': 1000000, 'org_id': 2045, 'status': 'open' }, method='POST') Delete an Activity pipedrive.activities({'id': 6789}, method='DELETE') Find a person, and use the search results. The variable termis the search term that has been passed in. import json ... response = pipedrive.persons_find({'term':term}, method='GET') results = response['data'] suggestions = [] if results != None: for result in results: suggestions.append({'value': result['name'], 'data': result}) json_response = {'query': term, 'suggestions': suggestions} data = json.dumps(json_response) And return datato some kind of javascript search result autocomplete thing (this example is formatted for devbridge's simple and easy-to-use jquery.autocomplete.js)
https://libraries.io/pypi/python-pipedrive
CC-MAIN-2021-10
en
refinedweb
ADSI for Beginners Firstly it's important for us to understand how ADSI works, the jargon and where all this stuff becomes useful. OK, well, ADSI is really a set of interfaces through which a provider (for example the Windows NT system) can publish its functionality. Each provider must conform to the basics of the ADSI structure, although it can offer additional features. This may sound a little confusing, so lets use a diagram to help out: This diagram (hopefully) makes the concept of namespaces and ADSI a bit clearer. Firstly, your code interacts with the ADSI structure. Through a set of common interfaces (IADsContainer etc) a variety of providers can make their data available. In this example the WinNT provider is being made available through the ADSI structure, with the data being Windows NT user information and other such details. To put these things in to a more practical application lets look at some simple but useful scripts using ADSI and the WinNT provider... Page 2 of 5 This article was originally published on November 20, 2002
https://www.developer.com/net/vb/article.php/10926_1540271_2/adsi-for-beginners.htm
CC-MAIN-2021-10
en
refinedweb
In automation testing, waiting makes the test execution pause for a certain amount of time before continuing with the next step. Selenium WebDriver provides various commands for implementing wait. In this post, we are going to discuss Webdriver Wait commands supplied by the Selenium WebDriver. However, if you want to do some serious automation projects, then you must refer these essential Selenium Webdriver commands from our blog. With every command, you’ll see a related code snippet to describe its usage in real-time situations. And to step up on the learning ladder, you should also know about the advance usage of Webdriver Wait commands. For this purpose, please read advanced Selenium Webdriver tutorials from our blog. Waits help the user to troubleshoot issues observed while re-directing to different web pages and during loading of new web elements. After refreshing the page, a time lag appears during reloading of the web pages and the web elements. Webdriver Wait commands help here to overcome the issues that occur due to the variation in time lag. Webdriver does this by checking if the element is present or visible or enabled or clickable etc. WebDriver provides two types of waits for handling the recurring page loads, invocation of pop-ups, alert messages, and the reflection of HTML objects on the web page. 1- Implicit Wait 2- Explicit Wait Apart from the above waits, you should also read about the Webdriver fluent wait from the below post. We’ve added a small demo there which uses fluent wait and ignores exceptions like <StaleElementReferenceException> or <NoSuchElementException>. Let us now discuss implicit and explicit waits in detail. Webdriver Wait Commands Tutorial With Examples. Webdriver Wait Commands – Examples. 1- Implicit WebDriver Wait Commands. Implicit waits are used to provide a default waiting time between each consecutive test step/command across the entire test script. Once you’d defined the implicit wait for X seconds, then the next step would only run after waiting for the X seconds. The drawback with implicit wait is that it increases the test script execution time. It makes each command wait for a stipulated amount of time before resuming the execution. Thus implicit wait may slow down execution of your test scripts if your application responds normally. 1.1- Import Packages. For using implicit waits in the test scripts, it’s mandatory to import the following package. import java.util.concurrent.TimeUnit; 1.2- Syntax of Implicit Wait. driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS); You need to add the above line of code into the test script. It’ll set an implicit wait soon after the instantiation of WebDriver instance variable. 1.3- Example of Implicit WebDriver Wait Commands.(); } } 2- Expected Condition. It’s a Webdriver concept to play around the wait commands. With this, you can explicitly build conditional blocks to apply wait. We’ll read more about these explicit Webdriver Wait commands in the next section. wait.until(ExpectedConditions.visibilityOfElementLocated(By.xpath("//div[contains(text(),'COMPOSE')]"))); driver.findElement(By.xpath("//div[contains(text(),'COMPOSE')]")).click(); The above command waits either for a stipulated amount of time (defined in WebDriver Wait class) or an expected condition to occur whichever occurs first. In the given code example, we used the <wait> reference variable of <WebDriverWait> class created in the previous step. Also, added the <ExpectedConditions> class which mentions an actual condition which will most likely occur. Thus whenever the expected condition occurs, the program control will move to the next step for execution instead of forcefully waiting for the entire 30 seconds (defined wait time). 2.1- Types of Expected Conditions. This class includes several conditions which you can access with the help of a <WebDriverWait> reference variable and the <until()> method. Let’s discuss few of them below. 2.1.1- elementToBeClickable(). It can wait for an element to be clickable i.e. it should be present/displayed/visible on the screen as well as enabled. Sample Code. wait.until(ExpectedConditions.elementToBeClickable(By.xpath(“//div[contains(text(),’COMPOSE’)]”))); 2.1.2- textToBePresentInElement(). It waits for an HTML object that contains a matching string pattern. Sample Code. wait.until(ExpectedConditions.textToBePresentInElement(By.xpath(“//div[@id= ‘forgotPass'”), “text to be found”)); 2.1.3- alertIsPresent(). This one waits for an alert box to appear. Sample Code. wait.until(ExpectedConditions.alertIsPresent()) !=null); 2.1.4- titleIs(). This command waits for a page with a matching title. Sample Code. wait.until(ExpectedConditions.titleIs(“gmail”)); 2.1.5- frameToBeAvailableAndSwitchToIt(). It waits for a frame until it is available. After that frame becomes available, the control switches to it automatically. Sample Code. wait.until(ExpectedConditions.frameToBeAvailableAndSwitchToIt(By.id(“newframe”))); 3- Explicit WebDriver Wait Commands. As the name signifies, by using an explicit wait, we explicitly instruct the WebDriver to wait. By using custom coding, we can tell Selenium WebDriver to pause till the expected condition occurs. What is the need of using the explicit wait when an implicit wait is in place and doing well already? It must be the first question juggling your mind. Well, there may be instances when some elements take more time to load. Setting implicit wait in such cases may not be wise because the browser will needlessly wait for the same amount of time for every element. All of this would delay the test execution. Hence, the WebDriver brings the concept of Explicit waits to handle such elements by passing the implicit wait. This wait command gives us the ability to apply wait where required. And avoids the force waiting while executing each of the test steps. 3.1- Import Packages. For accessing explicit Webdriver Wait commands and to use them in test scripts, it is mandatory to import the following packages into the test script. import org.openqa.selenium.support.ui.ExpectedConditions import org.openqa.selenium.support.ui.WebDriverWait 3.2- Initialize a Wait object using WebDriverWait class. WebDriverWait wait = new WebDriverWait(driver,30); We created a reference variable named as <wait> for the <WebDriverWait> class. Instantiated it using the WebDriver instance. Set the maximum wait time for the execution to layoff. We measure the max wait time in “seconds”. 3.3- WebDriver Code using Explicit wait. In the following example, the given test script is about logging into “gmail.com” with a username and password. After a successful login, it waits for the “compose” button to be available on the home page. Finally, it clicks on it. package waitExample; drv; @BeforeMethod public void setup() throws InterruptedException { // initializing drv variable using FirefoxDriver drv=new FirefoxDriver(); // launching gmail.com on the browser drv.get(""); // maximized the browser window drv.manage().window().maximize(); drv.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS); } @Test public void test() throws InterruptedException { // saving the GUI element reference into a "element" variable of WebElement type WebElement element = drv.findElement(By.id("Email")); // entering username element.sendKeys("dummy@gmail.com"); element.sendKeys(Keys.RETURN); // entering password drv.findElement(By.id("Passwd")).sendKeys("password"); // clicking signin button drv.findElement(By.id("signIn")).click(); // explicit wait - to wait for the compose button to be click-able WebDriverWait wait = new WebDriverWait(drv,30); wait.until(ExpectedConditions.visibilityOfElementLocated(By.xpath("//div[contains(text(),'COMPOSE')]"))); // click on the compose button as soon as the "compose" button is visible drv.findElement(By.xpath("//div[contains(text(),'COMPOSE')]")).click(); } @AfterMethod public void teardown() { // closes all the browser windows opened by web driver drv.quit(); } } In this code, Selenium WebDriver will wait for 30 seconds before throwing a TimeoutException. If it finds the element before 30 seconds, then it will return immediately. And in the next step, it’ll click on the “Compose” button. The test script will not wait for entire 30 seconds in this case thus saving the wait time. Footnote. Hope this blog post on the most essential “Webdriver Wait Commands” would help in implementing waits efficiently in your projects. If the above concepts and the examples were able to fetch your interest, then please like this post and share it further on social media. We always believe in making tutorials that are useful for our readers. And what could be better than that you tell us what you want to see in the next blog post? If you have any queries about the post, then do write to us. All the Best, TechBeamers. Hi Meenakishi, Your post is informative. I have recently upgraded to Selenium 3.4 and all my wait.until instructions are showing error. Could you please guide me how to use explicit wait with the latest version of Selenium. Hi Meenakshi, Thanks for the great explanations. I have 2 queries. 1) Though I set explicit wait and until function, I still lots of time getting stalereference exception. I could not avoid it completely. I have searched google many a times but none are giving fixed answer to this problem. Do you have any suggestions? 2) Implicit wait never works for me.. No idea though !! I am using thread.sleep sometimes… Thanks Hi, Senthil – Please read my answers to your two queries. And ask again if your problem is not resolved yet. #1 – I would suggest using Fluent wait for your problem. Also, create a wait object with ignoring exceptions and pass it to the wait/until function. You can check out our post on Fluent wait command which has a sample Web page and a demo program to show how to use Fluent wait. You can reuse the sample code in your project and it should fix the issue you are facing. #2 – The implicit wait is better than using the Thread.sleep() method. But to work it correctly, you should define it once after creating the webdriver instance and assign an optimum value, say 30. And then, you should use the same value throughout the program. However, I can say that your problem is completely solvable even without using any wait command. We’ve done it in one of our posts on the data-driven framework using Selenium 3. If you intend to refer this post, then look for the function named as “waitForPageLoaded”. It waits for the page to load by pooling for an element inside a loop. The element will be available after the page load, until then, it’ll keep failing and retrying inside the loop. Let me know if you have any more questions.
http://www.techbeamers.com/webdriver-wait-commands-tutorial-examples/
CC-MAIN-2018-26
en
refinedweb
Website integration sabre jobs. ...any confusion caused. I am looking for someone to design and a develop a modern easy to use website for an Airline. Our proposed GDS are: Amadeus, Sabre and Videcom. These are the platforms that we will use for ticket reservations and check-in. The website details are as follows ... *Functions* - Flight Booking, Car Rental & Hotel Reservation (Search need amadeus, and sabre API.s ,and a data base(my sql) . I have a software that pulls information from Sabre's Developer API (SOAP). Sabre is discontinuing the API and I need it switched over to the new one. Vender algo por mí Para vender necesitamos una buena demostración del producto a ofrecer. Para ello sabre los beneficios y cuál es la calidad de ello. Llegar al cliente .... ofreciéndole una garantizada. Compra e ideal a lo q puede buscar.] Need someone who is familiar with Sabre or another GDS system. You will also need to integrate with other APIs including (but not limited to): -Google maps -Opentable -Facebook/Twitter/Instagram And design a solid UX to test a market for a product. Please be able to demonstrate travel knowledge. I need a responsive web site for a travel agency Sabre scraping as discussed. I want to program a website, to buy airline tickets and booking them through reservation systems eg Amadeus and SABRE , please have experience in that communication Hello! I am looking for a fluent Sabre Red Workspace user to please help me with training from basics to full service flight bookings. While I do not require much attention in detail to hotel, rail, or car bookings, I am a travel industry agent moving into the Sabre services for the first time. I need mentoring on how to proactively use the software I need a responsive website. I would like it designed and built. I need an app that can show our schedule so I can trade with my colleague the company We are working for is using Sabre. As discussed, Sabre's implementation update ** TO BID ON THIS PROJECT, YOU MUST HAVE EXPERIENCE WITH AMADEUS, TRAVELPORT, SABRE ** Airline Booking Engine This application will be web based and allow our users to make bookings on the GDS via their API's. We are looking for coders who have a general understanding of airline booking systems. I need to create a. -) Creation of Online Travel Agency (OTA) website/system ~Search box (widget) that has interface to search for hotel, flights, car rental. This must be ajax driven and insertable via remote call of javascript code. The javascript code must have unique namespace. ~Website template for search results ~Search results display that is optimized for the Help the dev team navigate Sabre SOAP APIs to do a flight search through to flight booking along with SSRs. Help navigate the data schema. ...not like any images from the films it has to be entirely game based please, unless you can make images from the films look similar to the game ie General grievous ect. Light Sabre Characters like Darth Maul, Rey, Kylo Ren Luke ect are high priority as these catch the eye and add color to the design. In regards to the Logo, I would be open to completely We are a small travel agency and we want to develop our website and integrate an extension for 1- Flight reservation 2- Hotel reservation our website was built by joomla the extension should integrate with Sabre and Expedia GDS ...looking for an experience engineer to work with the existing developers to work on B2C and B2B Travel Booking Portals. You must have experience in working with below API's Sabre, Travelport, Expedia and Payments Gateways You must have minimum 2 years experience in working in Travel booking portals and in ASP.NET For immediate start We're a travel company looking to outsource some of the commission application for our 700+ travel agent network. Skills required...company looking to outsource some of the commission application for our 700+ travel agent network. Skills required knowledge of Microsoft Excel, Google Docs and Trams database (Sabre product) for commission application. We require consulting services in order to guide us in which Sabre Synxis webservices to use, and how to use them, in order to obtain some information on booking reservations. Experienced developer needed to install API's from Sabre network onto my website [login to view URL] Implement Sabre booking API (Flights & Hotel) into WooCommerce product. [login to view URL] (API) ...banner setting - Booking engine business hours setting - Online booking reports II. Online Booking Module - Search & Book for Services o Flight with Sabre PNR creation o Hotel & Dynamic Package booking creation - Responsive user interface with mobile compatibility - Multi-language Support (Chinese, Simplified Chinese Connect a hotel online travel booking website to its PMS on Oracle Hospitality using a channel manager Sabre API. Need excellence understanding of XML, SOAP API and Oracle database. Knowledge of Node.js, [login to view URL], Stripe Payment and Server management is a must. ..). This service is specifically Hi, I have a Sabre Flight Portal and need a wordpress developer to integrate it in WordPress. Please quote the final price and time required. Thanks. ...sub-domains / domains for end users and agents We are Travel agency, we need online booking like travelocity, priceline, etc. We have Sabre as GDS. We have website before eflydirect.com. We will like to add some features into it like home away, airb&b. We hope the website will be user friendly. need to implement Sabre Air Availability SOAP API. Hi there, I am looking for someone who can design nice website for me and also write travel software for me preferably AMADEUS or Sabre based. The ideal website should be one like [login to view URL] or [login to view URL] or [login to view URL] or [login to view URL] with back office booking functionality. Many thanks, Mehul Looking for an advanced sabre expert to teach me how to break married segments in the GDS Call GDS Sabre API for Flight search and create database for results
https://www.freelancer.com/job-search/website-integration-sabre/
CC-MAIN-2018-26
en
refinedweb
#include <NodeQuadCFInterp2.H> Class to interpolate quadratically at interface between this level and next coarser level, when the refinement ratio is 2. This class should be considered internal to NodeQuadCFInterp. Long Description: The interface has codimension one. If a fine node coincides with a coarse node, then we merely project from the coarse node to the fine node. Otherwise, we take the mean of the neighboring coarse nodes and subtract 1/8 * dx^2 times the sum of the second derivatives in the appropriate directions. The interpolation is performed in function coarseFineInterp(). The constructor computes m_loCFIVS and m_hiCFIVS to determine the fine nodes at the interface with the coarse level. The constructor also takes m_loCFIVS and m_hiCFIVS to determine the fine nodes at the interface with the coarse level. Calling getFineIVS() on m_loCFIVS[idir][dit()] gives us the IntVectSet of nodes of m_grids[dit()] on the face in the low direction in dimension idir, that lie on the interface with the coarser level. Similarly with m_hiCFIVS[idir][dit()] for the opposite face, in the high direction in dimension idir. 2-D Description: In the 2-D problem, the interface is 1-D. Between coarse nodes at 0 and 1, we approximate the value at the fine node by f(1/2) ~ (f(0) + f(1))/2 - 1/8 * f''(1/2) where we estimate the second derivative f''(1/2) from the coarse values f(0), f(1), and either f(-1) or f(2) or both. If the points -1 and 2 are both on the grid: o---o-x-o---o -1 0 1 2then we use If the point -1 is on the grid but 2 is not: o---o-x-o o -1 0 1 2then we approximate f''(1/2) by f''(0) and use If the point 2 is on the grid but -1 is not: o o-x-o---o -1 0 1 2then we approximate f''(1/2) by f''(1) and use 3-D Description: In the 3-D problem, the interface is 2-D. For any given fine node along the interface, look at its three coordinates. At least one of the coordinates is divisible by 2, meaning that it lies on a plane of coarse nodes. If all of the coordinates are even, then the fine node coincides with a coarse node, and we project the value. If only one coordinate is odd, then this reduces to the problem of a 1-D interface described above in the 2-D case. We are left with the problem of interpolating f(1/2, 1/2). (0,1) (1,1) o-------o | | | x | | | o-------o (0,0) (1,0)We use In particular, d^2 f/dx^2 (1/2,1/2) is approximated by the mean of d^2 f/dx^2 (1/2, 0) and d^2 f/dx^2 (1/2, 1). These second derivatives are estimated in the same way as described above for the 1-D interface in the 2-D case. Destructor. Full define function. Makes all coarse-fine information and sets internal variables. The current level is taken to be the fine level. Full define function. Makes all coarse-fine information and sets internal variables. The current level is taken to be the fine level. Set whether to give output. Default is false. Coarse / Fine (inhomogeneous) interpolation operator. Fill the nodes of a_phi on the coarse/fine interface with interpolated data from a_phiCoarse. Interpolate from a_psiFab to a_fineFab at a_iv along a line. CELL-centered grids at the current level (the finer level) CELL-centered physical domain of this level CELL-centered m_grids coarsened by 2 copy of coarse phi, used in CFInterp(). interpolating from interface only? degree of interpolation: 1 for (bi)linear, 2 for (bi)quadratic number of components of data, needed for setting size of work array has full define function been called? pointer to object storing coarse/fine interface nodes between this level and next coarser level pointer to object storing coarse/fine interface nodes between this level and next coarser level if true, print out extra information
http://davis.lbl.gov/Manuals/CHOMBO-RELEASE-3.1/classNodeQuadCFInterp2.html
CC-MAIN-2018-26
en
refinedweb
In case you need to get or modify data in Dynamics 365 while calling from Azure Logic App I would recommend to use an action of Http type. This action provides you to request Dynamics 365 endpoint using most of relevant verbs (methods), among each other: GET, POST, PATCH, DELETE along with authentication header. This is mandatory to bind your 365 user prior to consume Dynamics 365 Web API. Please refer to my previous blog where explained how to do it. Dynamics 365 exposes RESTFul Web API with a plenty of features even fairly complex. Dynamics 365 Web API implements OData protocol hence you need to consider it while constructing requests. Below you can find the example how the Azure Logic App can be implemented. It accepts a CityName as an input parameter and is combined in the query sent to Web API resulted with accounts with a city of specified. Azure Logic App is getting accounts from specified city and a source code: Logic App source code Please note the authentication is critical and you must provision Azure AD Application along with the binding prior to implementing the Logic App. To obtain values retrieved by the Http action you need to use a property named 'value' which is an array. Logic App has a variety of string, conversion functions which would help you in parsing retrieved data. Please refer to the link . Calling Dynamics 365 in custom API There is lightweight approach to consume Dynamics 365 over Application User. The instance which you can further leverage is OrganizationWebProxyClient. Class signature: public class OrganizationWebProxyClient : WebProxyClient, IOrganizationService First of all you need to aquuire the token from Azure AD using for instance ADAL library. var authenticationContext = new AuthenticationContext(authority); var clientCredentials = new ClientCredential(clientId, clientSecret); AuthenticationResult result = authenticationContext.AcquireTokenAsync(organizationUrl, clientCredentials).Result; In case you are going to use early binding you would need to inject your type assembly. It is not enough to set the useStrongTypes argument to True in the constructor but load the assembly: public OrganizationWebProxyClient(Uri serviceUrl, Assembly strongTypeAssembly); The URL must contain the SDK client version. The client version is set to the version of the SDK assemblies that the client application was linked to. For example: Please refer to organization service endpoint What you need to provide: - clientId = Azure AD Application ID entered to Application ID attribute in your Dynamics Application User - clientSecret = Secret key generated on above Azure AD Application - authority ={tenantId}/ {TenantId} - DirectoryID of your Azure AD - organizationUrl = Dynamics 365 instance i.e. - organizationEndpoint = Dynamics 365 instance including organization service endpoint, i.e. Finaly, you can return an instance: return new OrganizationWebProxyClient(organizationEndpoint, Assembly.Load({assembly name}))
https://blogs.msdn.microsoft.com/mchmielu/2017/10/06/call-dynamics-crm-from-azure-app-service/
CC-MAIN-2018-26
en
refinedweb
Local's improvements to Web Workers open doors to new kinds of web apps Paul Frazee's side project, Local, is not just a library for getting better utility out of Web Workers. It is that, of course, but it's also a different way to think about web apps. Local allows servers to run in browser threads where they host HTML and act as proxies between the main thread and remote services. The server threads run in their own Web Worker namespaces and communicate via Local's implementation of HTTP over the postMessage API. This architecture presents many interesting opportunities for new kinds of web apps. Want to learn more? Paul wrote an article outlining four potential use cases that Local enables. Give it a read, check out the docs (which are built using Local), or view the project's source code on GitHub.
https://changelog.com/news/locals-improvements-to-web-workers-open-doors-to-new-kinds-of-web-apps-XrV
CC-MAIN-2018-26
en
refinedweb
To. Here is a breakdown of the project we will be tackling: - A masonry grid of images, shown as collections. The collector, and a description, is attributed to each image. This is what a masonry grid looks like: - An offline app showing the grid of images. The app will be built with Vue, a fast JavaScript framework for small- and large-scale apps. - Because PWA images need to be effectively optimized to enhance smooth user experience, we will store and deliver them via Cloudinary, an end-to-end media management service. - Native app-like behavior when launched on supported mobile browsers. Let's get right to it! Setting up Vue with PWA Features A service worker is a background worker that runs independently in the browser. It doesn't make use of the main thread during execution. In fact, it's unaware of the DOM. Just JavaScript. Utilizing the service worker simplifies the process of making an app run offline. Even though setting it up is simple, things can go really bad when it’s not done right. For this reason, a lot of community-driven utility tools exist to help scaffold a service worker with all the recommended configurations. Vue is not an exception. Vue CLI has a community template that comes configured with a service worker. To create a new Vue app with this template, make sure you have the Vue CLI installed: npm install -g vue-cli Then run the following to initialize an app: vue init pwa offline-gallery The major difference is in the build/webpack.prod.conf.js file. Here is what one of the plugins configuration looks like: // service worker caching new SWPrecacheWebpackPlugin({ cacheId: 'my-vue-app', filename: 'service-worker.js', staticFileGlobs: ['dist/**/*.{js,html,css}'], minify: true, stripPrefix: 'dist/' }) The plugin generates a service worker file when we run the build command. The generated service worker caches all the files that match the glob expression in staticFileGlobs. As you can see, it is matching all the files in the dist folder. This folder is also generated after running the build command. We will see it in action after building the example app. Masonry Card Component Each of the cards will have an image, the image collector and the image description. Create a src/components/Card.vue file with the following template: <template> <div class="card"> <div class="card-content"> <img : <h4>{{collection.collector}}</h4> <p>{{collection.description}}</p> </div> </div> </template> The card expects a collection property from whatever parent it will have in the near future. To indicate that, add a Vue object with the props property: <template> ... </template> <script> export default { props: ['collection'], name: 'card' } </script> Then add a basic style to make the card pretty, with some hover animations: <template> ... </template> <script> ... </script> <style> .card { background: #F5F5F5; padding: 10px; margin: 0 0 1em; width: 100%; cursor: pointer; transition: all 100ms ease-in-out; } .card:hover { transform: translateY(-0.5em); background: #EBEBEB; } img { display: block; width: 100%; } </style> Rendering Cards with Images Stored in Cloudinary Cloudinary is a web service that provides an end-to-end solution for managing media. Storage, delivery, transformation, optimization and more are all provided as one service by Cloudinary. Cloudinary provides an upload API and widget. But I already have some cool images stored on my Cloudinary server, so we can focus on delivering, transforming and optimizing them. Create an array of JSON data in src/db.json with the content found here. This is a truncated version of the file: [ { "imageId": "jorge-vasconez-364878_me6ao9", "collector": "John Brian", "description": "Yikes invaluably thorough hello more some that neglectfully on badger crud inside mallard thus crud wildebeest pending much because therefore hippopotamus disbanded much." }, { "imageId": "wynand-van-poortvliet-364366_gsvyby", "collector": "Nnaemeka Ogbonnaya", "description": "Inimically kookaburra furrowed impala jeering porcupine flaunting across following raccoon that woolly less gosh weirdly more fiendishly ahead magnificent calmly manta wow racy brought rabbit otter quiet wretched less brusquely wow inflexible abandoned jeepers." }, { "imageId": "josef-reckziegel-361544_qwxzuw", "collector": "Ola Oluwa", "description": "A together cowered the spacious much darn sorely punctiliously hence much less belched goodness however poutingly wow darn fed thought stretched this affectingly more outside waved mad ostrich erect however cuckoo thought." }, ... ] The imageId field is the public_id of the image as assigned by the Cloudinary server, while collector and description are some random name and text respectively. Next, import this data and consume it in your src/App.vue file: import data from './db.json'; export default { name: 'app', data() { return { collections: [] } }, created() { this.collections = data.map(this.transform); } } We added a property collections and we set it's value to the JSON data. We are calling a transform method on each of the items in the array using the map method. Delivering and Transforming with Cloudinary You can't display an image using it's Cloudinary ID. We need to give Cloudinary the ID so it can generate a valid URL for us. First, install Cloudinary: npm install --save cloudinary-core Import the SDK and configure it with your cloud name (as seen on Cloudinary dashboard): import data from './db.json'; export default { name: 'app', data() { return { cloudinary: null, collections: [] } }, created() { this.cloudinary = cloudinary.Cloudinary.new({ cloud_name: 'christekh' }) this.collections = data.map(this.transform); } } The new method creates a Cloudinary instance that you can use to deliver and transform images. The url and image method takes the image public ID and returns a URL to the image or the URL in an image tag respectively: import cloudinary from 'cloudinary-core'; import data from './db.json'; import Card from './components/Card'; export default { name: 'app', data() { return { cloudinary: null, collections: [] } }, created() { this.cloudinary = cloudinary.Cloudinary.new({ cloud_name: 'christekh' }) this.collections = data.map(this.transform); }, methods: { transform(collection) { const imageUrl = this.cloudinary.url(collection.imageId}); return Object.assign(collection, { imageUrl }); } } } The transform method adds an imageUrl property to each of the image collections. The property is set to the URL received from the url method. The images will be returned as is. No reduction in dimension or size. We need to use the Cloudinary transformation feature to customize the image: methods: { transform(collection) { const imageUrl = this.cloudinary.url(collection.imageId, { width: 300, crop: "fit" }); return Object.assign(collection, { imageUrl }); } }, The url and image method takes a second argument, as seen above. This argument is an object and it is where you can customize your image properties and looks. To display the cards in the browser, import the card component, declare it as a component in the Vue object, then add it to the template: <template> <div id="app"> <header> <span>Offline Masonary Gallery</span> </header> <main> <div class="wrapper"> <div class="cards"> <card v-</card> </div> </div> </main> </div> </template> <script> ... import Card from './components/Card'; export default { name: 'app', data() { ... }, created() { ... }, methods: { ... }, components: { Card } } </script> We iterate over each card and list all the cards in the .cards element. Right now we just have a boring single column grid. Let's write some simple masonry styles. Masonry Grid To achieve the masonry grid, you need to add styles to both cards (parent) and card (child). Adding column-count and column-gap properties to the parent kicks things up: .cards { column-count: 1; column-gap: 1em; } We are close. Notice how the top cards seem cut off. Just adding inline-block to the display property of the child element fixes this: card { display: inline-block } If you consider adding animations to the cards, be careful as you will experience flickers while using the transform property. Assuming you have this simple transition on .cards: .card { transition: all 100ms ease-in-out; } .card:hover { transform: translateY(-0.5em); background: #EBEBEB; } Setting perspective and backface-visibilty to the element fixes that: .card { -webkit-perspective: 1000; -webkit-backface-visibility: hidden; transition: all 100ms ease-in-out; } You also can account for screen sizes and make the grids responsive: @media only screen and (min-width: 500px) { .cards { column-count: 2; } } @media only screen and (min-width: 700px) { .cards { column-count: 3; } } @media only screen and (min-width: 900px) { .cards { column-count: 4; } } @media only screen and (min-width: 1100px) { .cards { column-count: 5; } } Optimizing Images Cloudinary is already doing a great job by optimizing the size of the images after scaling them. You can optimize these images further, without losing quality while making your app much faster. Set the quality property to auto while transforming the images. Cloudinary will find a perfect balance of size and quality for your app: transform(collection) { const imageUrl = // Optimize this.cloudinary.url(collection.imageId, { width: 300, crop: "fit", quality: 'auto' }); return Object.assign(collection, { imageUrl }); } This is a picture showing the impact: The first image was optimized from 31kb to 8kb, the second from 16kb to 6kb, and so on. Almost 1/4 of the initial size; about 75 percent. That's a huge gain. Another screenshot of the app shows no loss in the quality of the images: Making the App Work Offline This is the most interesting aspect of this tutorial. Right now if we were to deploy, then go offline, we would get an error message. If you're using Chrome, you will see the popular dinosaur game. Remember we already have service worker configured. Now all we need to do is to generate the service worker file when we run the build command. To do so, run the following in your terminal: npm run build Next, serve the generated build file (found in the the dist folder). There are lots of options for serving files on localhost, but my favorite still remains serve: # install serve npm install -g serve # serve serve dist This will launch the app on localhost at port 5000. You would still see the page running as before. Open the developer tool, click the Application tab and select Service Workers. You should see a registered service worker: The huge red box highlights the status of the registered service worker. As you can see, the status shows it's active. Now let's attempt going offline by clicking the check box in small red box. Reload the page and you should see our app runs offline: The app runs, but the images are gone. Don't panic, there is a reasonable explanation for that. Take another look at the service worker config: new SWPrecacheWebpackPlugin({ cacheId: 'my-vue-app', filename: 'service-worker.js', staticFileGlobs: ['dist/**/*.{js,html,css}'], minify: true, stripPrefix: 'dist/' }) staticFileGlobs property is an array of local files we need to cache and we didn't tell the service worker to cache remote images from Cloudinary. To cache remotely stored assets and resources, you need to make use of a different property called runtimeCaching. It's an array and takes an object that contains the URL pattern to be cached, as well as the caching strategy: new SWPrecacheWebpackPlugin({ cacheId: 'my-vue-app', filename: 'service-worker.js', staticFileGlobs: ['dist/**/*.{js,html,css}'], runtimeCaching: [ { urlPattern: /^https:\/\/res\.cloudinary\.com\//, handler: 'cacheFirst' } ], minify: true, stripPrefix: 'dist/' }) Notice the URL pattern, we are using https rather than http. Service workers, for security reasons, only work with HTTPS, with localhost as exception. Therefore, make sure all your assets and resources are served over HTTPS. Cloudinary by default serves images over HTTP, so we need to update our transformation so it serves over HTTPS: const imageUrl = this.cloudinary.url(collection.imageId, { width: 300, crop: "fit", quality: 'auto', secure: true }); Setting the secure property to true does the trick. Now we can rebuild the app again, then try serving offline: # Build npm run build # Serve serve dist Unregister the service worker from the developer tool, go offline, the reload. Now you have an offline app: You can launch the app on your phone, activate airplane mode, reload the page and see the app running offline. Conclusion When your app is optimized and caters for users experiencing poor connectivity or no internet access, there is a high tendency of retaining users because you're keeping them engaged at all times. This is what PWA does for you. Keep in mind that a PWS must be characterized with optimized contents. Cloudinary takes care of that for you, as we saw in the article. You can create a free account to get started. This post originally appeared on VueJS Developers
https://cloudinary.com/blog/offline_first_masonry_grid_showcase_with_vue
CC-MAIN-2018-26
en
refinedweb
I have a project using a CY8C4024AZI-S403 device. I have running code but wanted to create a simple timer ISR that can perform things like scan for key presses etc. I've used a GlobalSignal before on a PSoC4 BLE so I added one called GlobalSignal_2 with an attached isr_1. In the configuration window, GlobalSignal_2 uses Watch Dog Timer Interrupt (WDTInt). I then went into the LFClk setting and enabled the Timer (WDT) as a free running timer (see attachment), added code to main to start the isr - isr_1_start(); -and added code to the isr_1.c file as follows: CY_ISR(isr_1_Interrupt) { #ifdef isr_1_INTERRUPT_INTERRUPT_CALLBACK isr_1_Interrupt_InterruptCallback(); #endif /* isr_1_INTERRUPT_INTERRUPT_CALLBACK */ /* Place your Interrupt code here. */ /* `#START isr_1_Interrupt` */ CySysWdtClearInterrupt(); tedsCount++; /* `#END` */ } It compiles and runs but doesn't seem to ever call the ISR as tedscount never increments. If I change the Timer (WDT) to mode Watchdog (w/interrupt) my code seem to get stuck in a doom loop - not sure what's happening Any ideas what's wrong? Does the WDT clock is enabled by default or you must start it on your main function? did you enabled it? EDIT: Just found this blog post about psoc4 and WDT, it might help you. PSoC4 Watch Dog Timer - IoT Expert
https://community.cypress.com/thread/31625
CC-MAIN-2018-26
en
refinedweb
Load Values into an Android Spinner This Android Spinner example takes a look at loading string items into the Spinner. The demo code provided is an Android Studio Spinner example project. The Spinner View is useful when you want the user to pick an item from a predetermined list, but do not want to take up a lot of screen space (such as using several radio buttons). Programmers moving to Android from other environments will notice a difference in terminology. The Android Spinner behaves in a similar fashion to what some may call a drop down list. A Spinner on other platforms is closer to the Android Pickers, which are often seen when setting dates and times on an Android device. (This Android loading Spinner.) The Android UI Pattern Programmers coding with the Android SDK soon come across a familiar pattern when designing the user interface. There are the Views that make up the screens (managed by Activites). There is data that needs to be displayed in those Views. Finally there is the code that links the Views to the data. For some Views and types of data the code that links them together is provided by an Adapter. In this example the data is an array of strings, the View is the Spinner, and an ArrayAdapter is the link between the two. Create a New Studio Project Create a new project in Android Studio, here called Spinner Demo. An Empty Activity is used with other settings left at their default values. Add the Data The array of strings is defined in a values file in the res/values folder. Use the Project explorer to open the file strings.xml. Enter the values for the Spinner into a string array. Here a list of types of coffee is going to be used. Here is an example strings.xml with a string array called coffeeType: <resources> <string name="app_name">Spinner Demo</string> <string-array <item>Filter</item> <item>Americano</item> <item>Latte</item> <item>Espresso</item> <item>Cappucino</item> <item>Mocha</item> <item>Skinny Latte</item> <item>Espresso Corretto</item> </string-array> <string name="coffePrompt">Choose Coffe</string> </resources> Add the Spinner The Spinner is added to the activity_main.xml layout file (in the res/layout folder). Open the layout file and delete the default Hello World! TextView. From the Palette drag and drop and Spinner onto the layout. Set Spinner ID to chooseCoffee (if dropping on a ConstraintLayout also set the required constraints). layout_width and layout_height are both set to wrap_content. The activity_main.xml file will be similar to this: <?xml version="1.0" encoding="utf-8"?> <android.support.constraint.ConstraintLayout xmlns: <Spinner android: </android.support.constraint.ConstraintLayout> Add the Code to Load the Spinner The Spinner added to the layout file is the basic framework, a layout will also be required to hold the data values in the collapsed and expanded state. Fortunately for simple uses Android includes default layouts. To connect the array to the Spinner an ArrayAdapter is created. The ArrayAdapter class has a static method that can take existing suitable resources and use them to create an ArrayAdapter instance. The method createFromResource() takes the current Context, the resource id of the string array and the resource id of a layout that will be used to display the selected array item when the Spinner is collapased (by default this layout is repeated to show the list of items in the expanded state). A layout for the data item has not been defined instead an Android default simple_spinner_item layout is used. Here is the code for the MainActivity Java class: package com.example.spinnerdemo; import android.support.v7.app.AppCompatActivity; import android.os.Bundle; import android.widget.ArrayAdapter; import android.widget.Spinner; public class MainActivity extends AppCompatActivity { @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); //an adapter is linked to array and a layout ArrayAdapter adapter = ArrayAdapter.createFromResource( this, R.array.coffeeType, android.R.layout.simple_spinner_item); //link the adapter to the spinner Spinner coffeeChoice = (Spinner) findViewById(R.id.chooseCoffee); coffeeChoice.setAdapter(adapter); } } Run the project to see the Spinner in action. The Spinner supports a dialog style (see the first graphic at the top of the tutorial). To see it first set the Spinner prompt property to the Choose Coffee string (@string/coffePrompt). Then change the spinnerMode property to dialog. The loading Spinner source code is available in loading_spinner.zip or from the Android Example Projects page. See Also - See the other Android Studio example projects to learn Android app programming. Archived Comments Isuru on March 8, 2013 at 5:38 am said: Thanks loads. It helps alot. Pawan on May 6, 2013 at 12:56 pm said: Very Nice tutorial. Author:Daniel S. Fowler Published: Updated:
https://tekeye.uk/android/examples/ui/load-values-into-an-android-spinner
CC-MAIN-2018-26
en
refinedweb
#include <EBFineToCoarRedist.H> Default constructor. Leaves object undefined. Modify the weights in the stencil by multiplying by the inputs in a normalized way. If you want mass weighting, send in the density on the coarse layout. uglier but general define fcn Full define function. Define the stencils with volume weights. If you want mass weights or whatever, use reset weights. potentially faster define function, especially with large numbers of boxes. Initialize values of registers to zero. Increments the register with data from coarseMass. This is the full redistribution mass. Internally the class figures out what actually goes to the coarse level. Redistribute the data contained in the internal buffers.
http://davis.lbl.gov/Manuals/CHOMBO-RELEASE-3.1/classEBFineToCoarRedist.html
CC-MAIN-2018-26
en
refinedweb
Sorry, lot of questions today I try this plugin: class TestOracleCommand(sublime_plugin.TextCommand): def run(self, edit, myparam): print myparam And call it with: >>> view.run_command("test_oracle", {"myparam":[1,2]}) [1.0, 2.0] >>> view.run_command("test_oracle", {"myparam":(1,2)}) None >>> view.run_command("test_oracle", {"myparam":sublime.Region(0)}) None As we can see, some type of parameter doesn't pass to the command.Is it a bug or a limitation ? Only types that have a direct JSON representation can be passed: lists, dictionaries (with string keys), numbers, bools, and strings. OK, thanks for the quick answer. Could I replace: self.window.run_command("oracle_exec", {"dsn": dsn, "entities": entities}) with: oracle_exec.OracleExecCommand(self.window).run(dsn=dsn, entities=entities) It look like it work but do you see any issue with this ? My other option is to convert the unsupported type in my argument to something supported and convert it back later. I'd recommend converting the arguments. Where possible, commands should purely depend on their arguments, and not data passed in other ways: this allows them to work properly in key bindings, menus, macros, and be repeated. Because this isn't a TextCommand, macros aren't an argument, but you may want to bind a key to the command in the future.
https://forum.sublimetext.com/t/command-argument-type-limitation/1864
CC-MAIN-2016-22
en
refinedweb
> Maybe Scott is interested in the sitemap generation stylesheet? I want to stress that I'm very interested in doing whatever it takes over the next couple of weeks to get Xalan 2 running well in Cocoon 2. You should write a note to me directly for any issues. You might try prepending something like [xalan2-cocoon2] to email subject lines to make sure I don't miss the mail. I will try and be extra responsive. As I said in the other note, sitemap.xsl should be working by tomorrow, once I address the extensions issue. -scott Giacomo Pati <Giacomo.Pati To: cocoon-dev@xml.apache.org @pwr.ch> cc: (bcc: Scott Boag/CAM/Lotus) Sent by: Subject: Re: C2 status giacomo@lotus .com 07/29/2000 03:41 PM Please respond to cocoon-dev "N. Sean Timm" wrote: > > "Giacomo Pati" <Giacomo.Pati@pwr.ch> wrote: > > Sean Timm will try to get Xalan 2.0 working with this release (trax). I > > personaly had no luck with Xalan 2.0 transforming a sitemap to java code > > using the stylesheet I've written > > > (src/org/apache/cocoon/components/language/markup/sitemap/java/sitemap.xsl). > > Yes...unfortunately, the current version of Xalan 2 doesn't seem to work at > all converting that sitemap, so I can't test any of my code changes until > that starts working. I fired a message off to Scott to see if I could get > some assistance on the Xalan end of things. I believe the changes I've made > are correct, however, so it should be good to go as soon as we can get Xalan > usable. I haven't created the TraxTransformer yet, but I've modified the > necessary files to make Cocoon2 use (and compile with) Xalan 2. DOMUtils > has been modified to utilize JAXP and TRaX, so there isn't a hard-coded > dependency on Xerces and Xalan anymore. (Although Xalan is currently the > only TRaX implementation that I know of...) To make C2 using Xalan 2 at all the sitemap generation stylesheet should be the test case for it. It uses the java namespace for XSLT extensions and this must be working. Until then C2 will remain on Xalan 1.x because I think we can't mix these Xalan versions. Maybe Scott is interested in the sitemap generation styles:
http://mail-archives.apache.org/mod_mbox/cocoon-dev/200007.mbox/%3COF8F6FB38C.EDB20A7D-ON8525692C.007B462D@lotus.com%3E
CC-MAIN-2016-22
en
refinedweb
-----Original Message----- From: Ryan Schmidt [mailto:subversion-2009d_at_ryandesign.com] Sent: Wednesday, December 16, 2009 4:48 PM To: DEVELA Brent Cc: 'users_at_subversion.apache.org' Subject: Re: Permission Denied Error on Pre-commit Java hook On Dec 15, 2009, at 23:35, DEVELA Brent wrote: > Ryan Schmidt wrote: > >> On Dec 15, 2009, at 04:14, DEVELA Brent wrote: >> >>>. >> >> How is your repository served -- via apache? or svnserve? As what user is that process running? Does that user have permission to write to the place where your jar is creating its file? >> > Thanks for the reply, My repository is served via apache and the user running it is www-data. And yes, the user does have rights in the folder. Here's the code I'm trying to run with the contents of the output.txt file from the python code. > > Python code: > > import os > output = os.popen(log_cmd, 'r').read() > ofile = open('/tmp/output.txt','w') > ofile.write(output) > ofile.close() > > JAVA code: > > package integrationtestscript; > > import com.ibatis.common.jdbc.ScriptRunner; > import java.io.*; > import java.sql.SQLException; > import com.mysql.jdbc.ConnectionImpl; > > public class App { > public static void main(String[] args) { > > try { > System.out.println("Hello World!"); > > File f; > f=new File("/tmp/myfile.txt"); > if(!f.exists()){ > f.createNewFile(); > System.out.println("New file \"myfile.txt\" has been created to the current directory"); > } > > System.out.println("Exit"); > System.exit(0); > } catch (Exception ex) { > System.out.println(ex.getMessage()); > } > } > } > > Contents of /tmp/Output.txt after execution as a pre-commit hook. > > Hello World! > Permission denied So the Python hook script can run, can call the Java code, and can write its output to /tmp/output.txt. But the Java code cannot create /tmp/myfile.txt. Does /tmp/myfile.txt already exist, and if so, are its permissions and ownership such that www-data can write to it? Another possibility: is SELinux enabled? If so, you may need to configure additional things. Ryan, Your solution worked. I changed the ownership of the file. It's running smoothly now. I did not have to configure SELinux because it was not installed in the first place. Thank you very much! Thanks, Brent Received on 2009-12-16 10:17:46 CET This is an archived mail posted to the Subversion Users mailing list.
http://svn.haxx.se/users/archive-2009-12/0302.shtml
CC-MAIN-2016-22
en
refinedweb
module Sound.Analysis.Meapsoft.Header ( Feature(..) , read_header , find_feature , required_feature , has_feature ) where import Control.Monad import Data.List import System.IO import Text.ParserCombinators.Parsec -- | Data type representing a MEAPsoft analysis feature. The -- 'feature_column' is the integer column index for the feature in -- the analysis data. The 'feature_degree' is the number of columns -- the feature requires. data Feature = Feature { feature_name :: String , feature_column :: Int , feature_degree :: Int } deriving (Show) -- | Read the header of a MEAPsoft analysis file and extract the list -- of stored features. read_header :: FilePath -> IO (Either String [Feature]) read_header fn = do s <- read_header_string fn let r = parse_header fn s return (case r of (Right h) -> Right (mk_features (normalize_header h)) (Left e) -> Left (show e)) -- | Search for a named feature. find_feature :: String -> [Feature] -> Maybe Feature find_feature n = find (\x -> feature_name x == n) -- | A variant of 'find_feature' that raises an error if the feature -- is not located. All analysis files have the features onset_time -- and chunk_length. required_feature :: String -> [Feature] -> Feature required_feature n fs = maybe (error n) id (find_feature n fs) -- | True iff the analysis data contains the named feature. has_feature :: String -> [Feature] -> Bool has_feature n fs = maybe False (const True) (find_feature n fs) -- Parsec parser for header string. type P a = GenParser Char () a word :: P String word = many1 (letter <|> oneOf "_") <?> "word" whitespace :: P String whitespace = many1 (oneOf " \t") in_paren :: P a -> P a in_paren p = do { char '(' ; r <- p ; char ')' ; return r } int :: P Int int = liftM read (optional (char '-') >> many1 digit) feature :: P (String, Int) feature = do { f <- word ; n <- optionMaybe (try (in_paren int)) ; return (f, maybe 1 id n) } type Header = [(String, Int)] features :: P Header features = sepEndBy1 feature whitespace hash :: P Char hash = char '#' header :: P Header header = hash >> features read_header_string :: String -> IO String read_header_string fn = withFile fn ReadMode hGetLine parse_header :: String -> String -> Either ParseError Header parse_header fn s = parse header fn s -- Delete 'filename', which is a string. normalize_header :: Header -> Header normalize_header (("filename",1):xs) = xs normalize_header xs = xs mk_features :: Header -> [Feature] mk_features h = let acc i (f, n) = (i+n, (Feature f i n)) in snd (mapAccumL acc 0 h)
http://hackage.haskell.org/package/hmeap-0.8/docs/src/Sound-Analysis-Meapsoft-Header.html
CC-MAIN-2016-22
en
refinedweb
Carsten Ziegeler said the following on 24/5/07 15:57: > hepabolu schrieb: >>>> So you can't rely that you get the namespace attributes in the dom >>>> builder. >> >> I think this is where things go wrong. >> >> Note that both binding file and source are generated with a pipeline >> and pipelineUtil.toDOM. >> >> I've done some debugging into pipelineUtil.toDOM and this is what I >> found: >> - binding file has all the namespaces in pipeline. This is confirmed >> because I can save the output of the pipeline and see the namespaces >> in the root element: >> >> <fb:context xmlns:> xmlns:> xmlns:> xmlns: >> >> - After returning from SourceUtil.toDOM (which uses the default >> DOMBuilder()), the only namespace left is >> fb="". >> Attributes of this node only holds 'path=/oe:version'. >> >> - This is true for the source=pipeline situation as well: only the >> oe="openEHR/v1/Version" is left. >> >> - The source=file situation has all namespaces in the attributes. >> >> I can understand that in the situation of source=pipeline there cannot >> be any matching because the oe namespace is not known in the binding >> file. However, this is also true for the situation of source=file and >> there matching happens on various fb:context until it fails on a >> difference in datatype. >> >> What I also don't understand is the fact that putting the >> source=pipeline through the savedocument function as I did this >> morning, gives me all the namespaces back. >> >> I'm not sure if this helps in the discussion and I have no clue on how >> to solve this. >> >> Anyone? >> > I must say that this is all a little bit strange to me as well. Now, are > you using the prefix oe somewhere in the xml? The prefix fb is definitly > used, so it might be that there is some optimization filtering unused > prefixes? Just a wild guess. Yes. Source is: <oe:version xmlns:oe="openEHR/v1/Version" xmlns:xsi="" xsi: So in fact I want the first line of the binding file to bind to /oe:version I don't think there are unused prefixes in both binding and source. Bye, Helma
http://mail-archives.apache.org/mod_mbox/cocoon-dev/200705.mbox/%3C46559B9E.5000209@gmail.com%3E
CC-MAIN-2016-22
en
refinedweb
This chapter describes how to use the Oracle Business Rules editor that is part of Business Process Composer. It contains a general introduction to Oracle Business Rules and provides tasks for working with them. Note:You cannot create new business rules or rules dictionaries using Business Process Composer. You can edit rules dictionaries that were defined as part of the business catalog of a project template. This chapter includes the following sections: Section 9.1, "Introduction to Oracle Business Rules" Section 9.2, "Introduction to the Business Process Composer Rules Editor" Section 9.3, "Viewing and Editing Business Rules in Business Process Composer" Section 9.4, "Editing Oracle Business Rules at Run Time" Section 9.5, "Assigning a Rule to a Business Rules Task" This section provides a brief introduction to Oracle Business Rules. Business rules are statements that describe business policies or describe key business decisions. For example, business rules include: Business policies such as spending policies and approval matrices. Constraints such as valid configurations or regulatory requirements. Computations such as discounts or premiums. Reasoning capabilities such as offers based on customer value. For example, a car rental company might use the following business rule: IF Rental_application.driver age < 21 THEN modify Rental_application(status: "Declined") An airline might use a business rule such as the following: IF Frequent_Flyer.total_miles > 10000 THEN modify Frequent_Flyer (status : "GOLD") A financial institution could use a business rule such as: IF Application_loan.income < 10000 THEN modify Application_loan (deny: true) These examples represent individual business rules. In practice, you can use Oracle Business Rules to combine many business rules or to use more complex tests. Oracle Business Rules allow process analysts to change policies that are expressed as business rules, with little or no assistance from a process developers. Applications using Oracle Business Rules support continuous change that enables the applications to adapt to new government regulations, improvements in internal company processes, or changes in relationships between customers and suppliers. Business rules follow an if-then structure and consists of two parts: If part: a condition or pattern match. Then part: a list of actions. Alternatively, you can express rules in a spreadsheet-like format called a decision table. The rule IF part is composed of conditional expressions, rule conditions, that refer to facts. For example: IF Rental_application.driver age < 21 The conditional expression compares a business term (Rental_application.driver age) to the number 21 using a less than comparison. The rule condition activates the rule whenever a combination of facts makes the conditional expression true. In some respects, the rule condition is like a query over the available facts in the Rules Engine, and for every row returned from the query the rule is activated. The rule THEN part contains the actions that are run when the rule is fired. A rule is fired after it is activated and selected among the other rule activations using conflict resolution mechanisms such as priority. A rule might perform several kinds of actions. An action can add facts, modify facts, or remove facts. An action can run a Java method or perform a function which may modify the status of facts or create facts. Rules fire sequentially, not in parallel. Note that rule actions often change the set of rule activations and thus change the next rule to fire. A Decision Table is an alternative business rule format that is more compact and intuitive when many rules are needed to analyze many combinations of property values. You can use a Decision Table to create a set of rules that covers all combinations or where no two combinations conflict. In Oracle Business Rules, facts are the objects that rules reason on. Each fact is an instance of a fact type. You must import or create one or more fact types before you can create rules. In Oracle Business Rules a fact is an asserted instance of a class. The Oracle Business Rules run time or a developer writing in the RL Language uses the RL Language assert function to add an instance of a fact to the Oracle Business Rules Engine. In Rules Designer you can define a variety of fact types based on, XML Schema, Java classes, Oracle RL definitions, and ADF Business Components view objects. In the Oracle Business Rules run time such fact type instances are called facts. You can create bucketsets to define a list of values or a range of values of a specified type. After you create a bucketset you can associate the bucketset with a fact property of matching type. Oracle Business Rules uses the bucketsets that you define to specify constraints on the values associated with fact properties in rules or in Decision Tables. You can also use bucketsets to specify constraints for variable initial values and function return values or function argument values. A ruleset is an Oracle Business Rules container for rules and Decision Tables. A ruleset provides a namespace, similar to a Java package, for rules and Decision Tables. In addition you can use rulesets to partially order rule firing. A decision function provides a contract for invoking rules from Java or SOA (from an SOA composite application or from a BPEL process). The contract includes input fact types, rulesets to run, and output fact types. Oracle Business Rules SDK (Rules SDK) provides APIs that let you write applications that access, create, modify, and run rules in Oracle Business Rules dictionaries (and all the contents of a dictionary). The Rules SDK provides the Decision Point API to access and run rules or Decision Tables from a Java application. A dictionary is an Oracle Business Rules container for facts, functions, globals, bucketsets, links, decision functions, and rulesets. A dictionary is an XML file that stores the application's rulesets and the data model. Dictionaries can link to other dictionaries. Oracle JDeveloper creates an Oracle Business Rules dictionary in a.rules file. You can create as many dictionaries as you need. A dictionary may contain any number of rulesets. The Business Process Composer rules editor enables you to view and edit a rules dictionary. Rules dictionaries are displayed in a tabbed window similar to the process editor and data association editor. This window is divided into two main areas: A panel containing tabbed panes for globals, bucketsets, and rulesets. An editor panel showing detailed information for each tab. Figure 9-1 shows the Globals tab displaying information for the globals contained in the Sales Quote example project. Note:This tab only displays globals that were marked as final when the rules dictionary was created. Figure 9-1 The Globals Tab of the Oracle Business Rules Editor Figure 9-2 shows the Bucketsets tab displaying details for the bucketsets contained in the Sales Quote example project. Figure 9-2 The Bucketsets Tab of the Oracle Business Rules Editor Figure 9-3 shows the Rulesets tab displaying details for the rulesets contained in the Sales Quote example. Figure 9-3 The Rulesets Tab of the Oracle Business Rules Editor The following sections provide specific procedures for viewing and editing Oracle Business Rules using Business Process Composer. Oracle Business Rules can be included as part of the reusable business catalog, enabling you to use business rules when editing Oracle BPM projects created from project templates. To open a business rule from the Project Navigator: In the Project Navigator, expand Rules, the expand the rules dictionary containing the rule you want to open. Double-click the rule. The business rule appears in the rules editor. If you want to edit the rule, ensure that the project is in edit mode. By default, the Globals tab is displayed, as shown in Figure 9-1. Using Business Process Composer you can add new bucketsets to a rules dictionary. To add a new bucketset: Open the rules dictionary where you want to edit the bucketset. Select the Bucketsets tab. This displays a table listing the bucketsets in the dictionary as shown in Figure 9-2. Click the Add Bucketset drop-down list, then select the type of bucket set you want to create. List of values List of ranges Select the bucketset from the list, then click Edit Bucketset. Edit the bucket set as required, then click OK. In Business Process Composer, selecting the Bucketsets tab shows you a table listing the bucketsets in the dictionary. To edit a bucketset, select the appropriate row and click the Edit icon. Depending on the type of the bucketset, Range, Enum, or LOV, this displays a corresponding Edit bucketset page. You can create a Range Bucketset by clicking Add in the menu bar and selecting a type. This adds a new row in the Bucketsets table. Adding a bucket automatically adds an end point for a range bucket and a value for an LOV bucket based on the data type. You can modify the newly added bucket end point or value. Note that the alias is modified when an end point or value is changed. To delete a bucketset, select a row and click Delete. To edit a bucketset: Open the rules dictionary where you want to edit the bucketset. Select the Bucketsets tab. This displays a table listing the bucketsets in the dictionary as shown in Figure 9-2. Select the appropriate bucketset row and click the Edit Bucketset icon. Use the Bucketset Editor to edit the appropriate fields in the bucketset. Click OK to confirm the changes. When you open a rules dictionary, Business Process Composer displays the Globals tab. The Globals tab only shows final global variables (global variables with Final option selected). You cannot create or delete global variables. From the Globals tab, in edit mode you can edit the Name, Description, and Value fields. For the Value field, you can use the expression builder to set the value. To view globals: Open the rules dictionary where you want to view globals. Select the Globals tab. This displays a table listing the globals defined for this rules dictionary as shown in Figure 9-1. Using Business Process Composer you can edit, add, and delete rules in a ruleset. To add a rule to a ruleset: Open the rules dictionary containing the ruleset where you want to add a rule. Click the tab of the ruleset you want to edit. This displays a table listing the rulesets defined for this dictionary as shown in Figure 9-3 Click the New Rule icon. Enter a name for the new rule. Click the Show Advanced Settings icon. In the IF area, use the controls, icons, and selection boxes, including the Left Value expression icon, drop-down list for an operator, and Right Value expression icon to modify the condition. In the THEN area for the rule, next to the rule action click Add Action. Click Save in the editor toolbar to save the changes to the rule dictionary. You can use Business Process Composer to open deployed Oracle BPM projects. Opening a deployed project enables you to edit the Oracle Business Rules contained in the project and deploy your changes back to Oracle BPM run time. Note:When editing a deployed project, you can only edit the Oracle Business Rules for that project. You can view other project resources, but cannot edit them. To open a deployed project: From the Project menu select Open a Deployed Project. If you are currently editing a project, your changes are automatically saved. Select Deployed Projects, then select the project you want to open from the project list. Expand Repository, then select the deployed project you want to open. Click Ok. In the Project Navigator, expand Business Rules then expand the rules dictionary where whose Oracle Business Rules you want to edit. Click Edit in the rules editor. Edit the rules as required, then click Save. Click Validate to ensure the changes you made to the business s made are valid. Click Commit to commit the changes to Oracle BPM run time. Click Yes. From the Project menu, select Close Project. The business rules task is an Oracle BPMN element that enables you to incorporate Oracle Business Rules within a process model. When editing a project based on a project template containing business rules in the business catalog, you can assign business rules to business rules task. To assign a business rule to a business rules task: Open the process containing the business rules task you want to edit. Right-click the business rules task, then select Properties. Select the Implementation tab. Click Change. The displays the business rules browser containing a table of available business rules. Double-click a rule from the table. Click OK.
http://docs.oracle.com/cd/E25054_01/user.1111/e15177/business_rules_bpmcu.htm
CC-MAIN-2016-22
en
refinedweb
Introduction InfoSphere Streams is an advanced computing platform that enables user-developed applications to quickly ingest, analyze, and correlate information as it arrives from thousands of real-time sources. With InfoSphere Streams, you can analyze data in motion and can extend the value of existing systems by integrating with different applications. InfoSphere Streams can also process structured and unstructured data. It includes a set of built-in toolkits that simplify application development. These toolkits are reusable components that include operators, types, and functions. The toolkits are broadly categorized as standard toolkits, which contain generic operators, and specialized toolkits, which contain operators to handle domain-specific functions. InfoSphere Streams can connect to InfoSphere Data Explorer using the primitive operator DataExplorerPush, which is part of the IBM big data platform and one of the specialized toolkits that gets installed with InfoSphere Streams. The DataExplorerPush operator enables InfoSphere Streams to push data in the form of records to InfoSphere Data Explorer using BigIndex APIs, as shown in Figure 1. The data pushed to InfoSphere Data Explorer is indexed and used in InfoSphere Data Explorer UIs. Figure 1. Architecture diagram of InfoSphere Streams DataExplorerPush operator What you need to know and software you need to install This article assumes you have the following skills: - A basic knowledge and understanding of InfoSphere Streams infrastructure, Streams Processing Language (SPL), BigIndex APIs, Apache ZooKeeper and InfoSphere Data Explorer: - InfoSphere Streams DataExplorerPushoperator is used to push data in the form of records to InfoSphere Data Explorer using BigIndex APIs. - BigIndex uses InfoSphere Data Explorer as its data store for all content ingested through its APIs. - ZooKeeper is used to configure a cluster environment. BigIndex uses ZooKeeper to maintain InfoSphere Data Explorer's configuration and contains its connection details ( endpoint, username, and password) in the cluster. Ensure the following software is installed and available: - InfoSphere Streams (3.0 or higher)— Installed on a single node or on a cluster. Set the STREAMS_INSTALLenvironment variable to the InfoSphere Streams installation directory. - InfoSphere Data Explorer 8.2.2 or 8.2.3— Installed and running. After InfoSphere Data Explorer is installed, make sure the additional configuration is complete. After installation, you can find the additional configuration details in the ReleaseNotes.txt file. The ReleaseNotes.txt file is in the directory <DataExplorer_Install>/AppBuilder/bigindex/docs/bigindex. - Apache ZooKeeper— Installed and running. The ZooKeeper version for each InfoSphere Data Explorer version is different. Refer to the InfoSphere Data Explorer installation guide for the detailed requirements. - The DataExplorerPushoperator of InfoSphere Streams uses BigIndex APIs to insert records into InfoSphere Data Explorer V8.2.2 and V8.2.3. Therefore, the respective JAR files should be accessible from the InfoSphere Streams server. If the InfoSphere Data Explorer server is not on the same node as InfoSphere Streams, copy the install-dir/AppBuilder/bigindex.zip compressed folder from the node where InfoSphere Data Explorer is installed to a location that can be accessed from InfoSphere Streams. Extract the contents of the bigindex.zip folder . Create or update the configuration of a ZooKeeper namespace After you install InfoSphere Data Explorer and ZooKeeper, you need to create or update the ZooKeeper namespace for InfoSphere Data Explorer with the BigIndex configuration information and the InfoSphere Data Explorer installation information. Note: The import or load operation you are about to perform completely replaces any existing configuration in the ZooKeeper namespace. To import or load the BigIndex entity model file into the ZooKeeper namespace issue the ZooKeeper command: java -jar bigindex-X.Y.Z.jar --properties-file ZooKeeper.properties --import-file configuration.xml --export-to-screen. For detailed steps on how to edit the configuration file and load it to the ZooKeeper namespace, refer to Create or edit the BigIndex configuration and Load the entity model file into the ZooKeeper namespace. Pass the ZooKeeper properties file and the BigIndex entity model file to the command line utility, as shown in Listing 1. Listing 1. Sample properties file servers=testDep.in.ibm.com:1212 namespace=zookeeper You can also pass the ZooKeeper properties directly instead of using the properties file by issuing the command java -jar bigindex-X.Y.Z.jar -n zookeeper -s testDep.in.ibm.com:1212 -i config.xml --export-to-screen. The example configuration file ZookeeperExampleModel.xml is in the directory <DataExplorer_Install>/bigindex/examples/bigindex for InfoSphere Data Explorer V8.2.2 or V8.2.3, as shown in Listing 2. Note: In the following example, two entity types are associated with different collections on the same server. To accommodate multiple servers, define an additional data-explorer-engine-instance and move one of the <serves> nodes under that instead. Listing 2. ZookeeperExampleModel of InfoSphere Data Explorer 8.2.3 <?xml version="1.0" encoding="UTF-8"?> <configuration> <data-stores> <cluster-collection-store <cluster-collection-store </data-stores> <entity-model> <!-- The store-name below should match the name you provide for the store above --> <!-- The name of the entity here should match the entity type in the example --> <entity-definition <entity-definition </entity-model> <data-sources> <data-explorer-engine-instance username="your-username" url=" velocity.exe?v.app=api-soap&wsdl=1&use-types=true&" password="your-password" > <!-- The name here should match the name of any stores you created above --> <serves store- <serves store- </data-explorer-engine-instance> </data-sources> </configuration> Create or edit the BigIndex configuration You can use the ZookeeperExampleModel.xml file to create a new entity model file. Modify the values according to the requirements and save it as config.xml. See the following list of required XML elements. For details, see the documentation in the <DataExplorer_Install>/AppBuilder/bigindex/docs/bigindex directory.> - Edit the XML element data-stores. Assign a collection-nameand a store name. You can use choose any name, for example, zookeeper-collection1and zookeeper-store1. - Edit the XML element entity-model. Assign an entity-definition nameand a store-name. For example, the entity-definition namecan be Tweet. Make sure the store-namematches the store-nameprovided under data-stores. - Edit the XML element data-source. Provide values for data-explorer-engine web url, username, and password, as in this example: url=". Edit the store-name. Make sure the store-namematches the store-nameprovided under data-stores. Choose the shard-numberand total-shardsbased on the requirements of the project and the infrastructure available. Load the entity model file into the ZooKeeper namespace To load the entity model, follow these steps: - For InfoSphere Data Explorer 8.2.2 and later releases, use <DataExplorer_Install>/AppBuilder/bigindex/lib/bigindex.X.Y.Z.jar. - Next, you need to create or update the ZooKeeper namespace using the config.xml file. Pass the configuration file created from the previous section and provide the ZooKeeper port number (the default port is 2181) and server name. Issue the command java -jar bigindex-X.Y.Z.jar -n zookeeper -s testDep.in.ibm.com:1212 -i ~/config.xml - -n: Refers to the Zookeeper namespace. - -s: Refers to the host and port of the ZooKeeper server. - -i: Refers to the configuration file. Note: The following parameters are required by the InfoSphere Streams operator: - The entity-definitionname under the entity-modelfrom the config.xml file needs to be passed to the InfoSphere Streams recordTypeparameter. - The zookeeperNamespaceis passed to the Java™ command in the load model step. In this example, it is zookeeper. Make a note of ZookeeperEndpoints, which are the host and port of the ZooKeeper server. In this example, the host is testDep.in.ibm.comand the port is 1212. java -jar bigindex-X.Y.Z.jar -n zookeeper -s testDep.in.ibm.com:1212 -i ~/config.xml Run a sample InfoSphere Streams application using the DataExplorerPush operator This sample application demonstrates how to use InfoSphere Streams DataExplorerPush operator to push data to InfoSphere Data Explorer: - Set the STREAMS_INSTALLenvironment variable to the InfoSphere Streams installation directory. - Set the BIGSEARCH_JARenvironment variable to the location of the .jar file. For example, for InfoSphere Data Explorer 8.2.2 or 8.2.3, if the .zip file <install-dir>/AppBuilder/bigindex.zip is copied and extracted to /opt/DataExplorer/bigindex, the export command is BIGSEARCH_JAR='/opt/DataExplorer/bigindex/lib/bigindex-x.jar. Replace x with the right version of the BigIndex .jar file. - The DataExplorerPushsample application can be found in the $STREAMS_INSTALL/toolkits/com.ibm.streams.bigdata/samples/DataExplorerPush directory. The contents of the directory include the Streams Programming Language (SPL) source file, the makefile, info.xml, and the subdirectories data and etc. The etc directory contains the connections.txt document. - Using the following command, create a new directory — sample1, for example — in your home directory and copy the DataExplorerPushsamples to this directory. cp -R $STREAMS_INSTALL/toolkits/com.ibm.streams.bigdata/samples/DataExplorerPush $HOME/sample1/ - Edit etc/connections.txt file with information about your connection to InfoSphere Data Explorer. Provide the ZooKeeperNamespace and ZooKeeperEndpoints. - Edit the SPL source file. Update the recordTypeparameter. This is the same as the entity-definitionname. - Build the sample by running make. By default, the sample application is compiled as a distributed application. To compile the application as a stand-alone application, run make standalone: - If you compiled the sample as stand-alone, use the following command to run the sample application: ./output/bin/standalone. - If you compiled the sample as distributed, use the following command to run the sample application: streamtool submitjob DataPushExplorerMain.adl -i instancename. - To create an InfoSphere Streams instance, use the streamtool mkinstancecommand. For detailed instructions for creating a Streams instance, run streamtool help. View the results of the sample application The Tweet.txt file in the data folder is read by the FileSource operator, and tuples are sent to the DataExplorerPush operator. Records are created from these tuples on a per-tuple basis and are pushed to InfoSphere Data Explorer. To view the indexed records in InfoSphere Data Explorer console, follow these steps: Note: Confirm that you have completed the additional configuration steps described in the ReleaseNotes.txt file, which is in the <DataExplorer_Install>/AppBuilder/bigindex/docs/bigindex directory. If these extra configuration steps are not performed, you cannot use the admin tool to search InfoSphere Data Explorer instances that contain BigIndex data. BigIndex data is stored in Data Explorer arenas, which are not supported in the end-user display. However, you can search one arena at a time by using the method describe in the InfoSphere Data Explorer ReleaseNotes.txt file. - After you enable the InfoSphere Data Explorer with the workaround to search the data stored in arenas, log into the InfoSphere Data Explorer console at. - Select Search Collections under the administration tool on the left pane of the admin console. Figure 2. Search collections admin tool - Click the collection name used in the InfoSphere Streams application. Figure 3. Names for search engine collections - On the left pane of InfoSphere Data Explorer admin console page, click query-meta. Figure 4. Test project with query-meta link - Modify the URL by appending &arena=<recordType>. In this example, because the recordType is Tweet, append &arena=Tweetto the URL. The sample URL is similar to &arena=Tweet. The result page lists all the records for the selected arena. Figure 5. Tweets in a results window Summary InfoSphere Streams can integrate with InfoSphere Data Explorer using the InfoSphere Streams DataExplorerPush operator. This article describes the steps to enable the integration and it shows how to run InfoSphere Streams samples to push data and view the results in the InfoSphere Data Explorer web console. Acknowledgements Thanks go to Manasa K. Rao and Scott Linder for their review and help. Resources Learn - Find out more about InfoSphere Streams in the Information Center for each version: 3.1 and 2.1. - For more information about InfoSphere Data Explorer 8.2, visit the Information Center. - Get started with InfoSphere Streams with these developerWorks resources on Streams. - Watch developerWorks demos ranging from product installation and setup demos for beginners, to advanced functionality for experienced developers. - Learn more about big data in the developerWorks big data content area. Find technical documentation, how-to articles, education, downloads, product information, and more. - Stay current with developerWorks technical events and webcasts. - Follow developerWorks on Twitter. Get products and technologies - Download InfoSphere Streams Quick Start Edition, a complimentary, downloadable, non-production version of InfoSphere Streams. - Build your next development project with IBM trial software, available for download directly from developerWorks. Discuss - Ask questions and get answers in the InfoSphere Streams forum. - Get involved in the developerWorks Community. Connect with other developerWorks users while you explore developer-driven blogs, forums, groups, and wikis. -.
http://www.ibm.com/developerworks/library/bd-streamsexplorer/index.html
CC-MAIN-2016-22
en
refinedweb
I'm trying to utilise pointers so I can get using them more. so far this is how I've laid my code out. #include <iostream> #include <cstdlib> using namespace std; int main() { int * p; int amountOfNumbers = 0; p = new int [11]; cout << "Please enter the amount of numbers you would like" << endl; cin >> amountOfNumbers; for(int i = 0; i < amountOfNumbers; i++) { p[0] = 0; p[1] = 7; for(int n = 3; n < 11; n++) { p[n] = rand() % 9; } } system("pause"); return 0; } It's not really finished. Basically what I want it to do is, ask the user how many numbers they want, then a for loop to randomly generate a number between 0 - 9 and assign it to the next array and so fourth. Obviously making sure that each number starts off with 07. Can any one help or suggest a better way of going about it.
http://www.dreamincode.net/forums/topic/316201-random-phone-number-gen/page__pid__1823521__st__0
CC-MAIN-2016-22
en
refinedweb
This scratchpad will let us keep track of who's doing what, etc. If you're working on a draft or have bits and pieces of something finished, link it into here under the proper section so that we can keep track of everything. Who: Frank Manola Write up the RDF model: intro, background (URIs, XML, namespaces), RDF model Who: Eric Miller, Dan Brickley Work on the RDF Schema section. Who: Sean B. Palmer, Aaron Swartz, Eric Miller Provide a user scenario to put it all together. Likely will be photo metadata. Who: All Collect various hints and tips for collecting into the primer / RDF cookbook.
http://www.w3.org/2001/09/rdfprimer/todo
CC-MAIN-2016-22
en
refinedweb
Subject: [OMPI users] dlopening openmpi libs (was: Re: Problems in 1.3 loading shared libs when usingVampirServer) From: Olaf Lenz (lenzo_at_[hidden]) Date: 2009-03-23 15:46:18 Hi! Sorry for taking up this old thread, but I think the solution is not yet satisfactory. To summarize the problem: OpenMPI has a plugin architecture. The plugins rely on the fact, that the OpenMPI library is loaded into the global namespace and are accessible to the plugins. If the mpi lib is dynamically loaded into a private namespace (as for example when using it in a python module), the plugins can't find the symbols of the library. So far, the suggested solution is, that the OpenMPI users should open libmpi.so into the global namespace to avoid the problem, or to compile OpenMPI using --enable-shared --enable-static. Both approaches have their problems, that I detail below. What I do not really get is why not to solve the problem on the side of OpenMPI. As far as I see it, this problem has already been discussed here: and the solution that is described there still looks as though it should still work now, or shouldn't it? Just link all the OpenMPI plugins against the base OpenMPI libraries, and it should work. Or am I wrong? The problems with the suggested solutions: * Opening libmpi into the global namespace has exactly the problems that come with loading symbols into the global namespace. After all, there is some sense in not putting all symbols into the global namespace... * Furthermore, it requires the modification of the program/plugin loading the mpi library. In some cases, it might not be simple to do this modification, as it would have to be done in a package outside of the scope of the user. After all, some packages might decide better to ignore OpenMPI than to adapt their code to OpenMPI. So, I think it would be the best solution if OpenMPI would try to be as compatible to other MPI implementations as possible. * On many medium-size clusters, it is not easily possible for a user to install their own version of MPI, and the admins are often reluctant to install anything which is not of the shelf. Therefore, if it is necessary to compile OpenMPI with non-default flags to make it work with some plugin-enabled programs, I would guess, that this simply won't happen on many of this type of clusters. Olaf
https://www.open-mpi.org/community/lists/users/2009/03/8551.php
CC-MAIN-2016-22
en
refinedweb
sem_open - initialize and open a named semaphore (REALTIME) [SEM] #include <semaphore.h>#include <semaphore.h> sem_t *sem_open(const char *name, int oflag, ...); The sem_open() function shall establish(), [TMO] sem_timedwait(),sem_timedwait(), sem_trywait(), sem_post(), and sem_close(). The semaphore remains usable by this process until the semaphore is closed by a successful call to sem_close(), _exit(), or one of the exec functions shall refer to the same semaphore object, as long as that name has not been removed. If name does not begin with the slash character, the effect is implementation-defined. The interpretation of slash characters other than the leading slash character in name is implementation-defined. If a process makes multiple successful calls to sem_open() with the same value for name, the same semaphore address shall be returned for each such successful call, provided that there have been no calls to sem_unlink() for this semaphore, and at least one previous successful sem_open() call for this semaphore has not been matched with a sem_close() call. References to copies of the semaphore produce undefined results. Upon successful completion, the sem_open() function shall return the address of the semaphore. Otherwise, it shall return a value of SEM_FAILED and set errno to indicate the error. The symbol SEM_FAILED is defined in the <semaphore.h> header. No successful return from sem_open() shall return the value SEM_FAILED. If any of the following conditions occur, the sem_open() function shall, <semaphore.
http://pubs.opengroup.org/onlinepubs/009604499/functions/sem_open.html
CC-MAIN-2016-22
en
refinedweb
These two streams are carriers of data to a destination. Both look alike, but they differ in some finer concepts. Basically, the OutputStream class (super class of PrintStream) is an abstract class, JDK 1.0, that includes methods to write a stream of bytes in a sequential order The Writer class (super class of PrintWriter) is an abstract class, JDK 1.1, that includes methods to write a stream of characters in a sequential order PrintStream (of PrintStream vs PrintWriter) It is a byte stream introduced with JDK 1.0. The class is derived from FilterOutputStream. Being a byte stream it can be chained to another high-level byte stream for greater affect or combined functionality. The print stream object can be used to carry any data to any output stream. Following is the class signature public class PrintStream extends FilterOutputStream implements Appendable, Closeable The following code writes data types to the file pqr.txt. Output screenshot of PrintStream vs PrintWriter The above code prints at DOS prompt. Generally all I/O streams throw IOException, but PrintStream does not. This facility is provided as PrintStream is used to write data to standard output stream (System.out) very often in the program; to avoid the mess of exception handling, it is just a convenience. PrintStreamm object can be created to flush data automatically whenever println and print etc. methods are called. The extra parameter flag true indicates automatic flush. All the characters printed using PrintStream are converted internally into bytes as per the platform default character encoding. PrintWriter (of PrintStream vs PrintWriter) PrintStream is used to write data in bytes and PrintWriter is used where it is required to write data in characters. PrintStream is a byte stream (subclass of OutputStream) and PrintWriter is a character stream (subclass of Writer). Following is the class signature This class includes all the print methods of PrintStream but does not include the methods of drawing raw bytes. Most of the methods of PrintStream and PrintWriter do not throw exceptions but some constructors (not all) throw FileNotFoundException. The programmer, if required, can check any exceptions state with checkError() method. PrintStream is used for binary output (doing extra conversion) where as PrintWriter is for text (character set) output. The PrintStream uses platform dependent character encoding which increases platform dependency issues that causes sometimes problems when moving from one platform to another. With PrintWriter, typically we can specify, the encoding and thus can avoid platform dependencies. Internally, PrintStream converts bytes to characters as per encoding rules of the OS. But PrintWriter does the job straightaway (no char-byte conversion). PrintStream stream = new PrintStream(output); The above statement uses default encoding. PrintWriter writer = new PrintWriter(new OutputStreamWriter(output, "UTF-8")); With PrintWriter constructor, encoding UTF-8 is used; not platform dependent. This is the advantage with PrintWriter where the programmer can control the encoding style. From JDK 1.4, Java API allows to specify character encoding with PrintStream also. Thus, the difference became very narrow but for auto flushing. With PrintWriter also, when auto flush is enabled, auto flushing is done whenever print methods are called. Like print streams, many groups exist in Java. A few are given for your reference. 1. Wrapper classes 2. Listeners 3. Adapters 4. Layout managers 5. Byte streams 6. Character streams 7. Thread groups 8. Filter streams 9. Wrapper streams 10. Components 11. Containers
http://way2java.com/io/printstream-vs-printwriter/
CC-MAIN-2016-22
en
refinedweb
On 06/27, Satyam Sharma wrote:>> Thanks for your comments, I'm still not convinced, however.An perhaps you are right. I don't have a very strong opinion on that.Still I can't understand why it is better if kthread_stop() sends asignal as well. Contrary, I believe we should avoid signals when itcomes to kernel threads.One can always use force_sig() or allow_signal() + send_sig() whenit is really needed, like cifs does.> On 6/26/07, Oleg Nesterov <oleg@tv-sign.ru> wrote:> >> .>> Anyway, I think _all_ usages of kthread_stop() in the kernel *do* want> the thread to stop *right then*. After all, kthread_stop() doesn't even> return (gets blocked on wait_for_completion()) till it knows the target> kthread *has* exited completely.Yes, kthread_stop(k) means that k should exit eventually, but I don'tthink that kthread_stop() should try to force the exit.> And if a workqueue is blocked on tcp_recvmsg() or skb_recv_datagram()> or some such, I don't see how that flush_workqueue (if that is what you> meant) would succeed anyway (unless you do send the signal too),timeout, but this was just a silly example. I am talking about the casewhen wait_event_interruptible() should not fail (unless something badhappens) inside the "while (!kthread_should_stop())" loop.Note also that kthread could use TASK_INTERRUPTIBLE sleep because itdoesn't want to contribute to loadavg, and because it knows that allsignals are ignored.> Note that the exact scenario you're talking about wouldn't mean the> kthread getting killed before it's supposed to be stopped anyway.Yes sure, we can't kill the kernel thread via signal. I meant we can havesome unexpected failure.> >?I think this "if(tsk)" is just bogus, and should be killed.Oleg.-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at
https://lkml.org/lkml/2007/6/27/100
CC-MAIN-2016-22
en
refinedweb
Cancer Trading binary options pdf. A prospective monopolistic competitor binnary finds binary option brokers in india relatively easy to start production because very little capital binary options brokers no deposit bonus no great technical know-how are required to open a small gasoline station, grocery store, barber shop, etc. S0005 Page 1927 282 Risk-Taking and Sexual Behavior prostitution, premarital sex, and polygamy, among other sexual activities, are condemned in some socie- ties, tolerated in others, and championed in still others. How hippocampal activity normally acts to influence the association of context and fear remains to bi nary understood. Some of the most significant work on education has used it as a moderator of other predictors in the context of tests between competing theories.Lancioni, L. Although little direct evidence points binary options brokers no deposit bonus how these effects might relate to the sex differences in cognitive function, there is binary options affiliate forum reason to suppose that at least some sex differences are related to go- nadal hormones. TheInsuranceInstituteForHighway Safetyconductedastudyofthefederalgov- emnmentsFatalAccidentReportingSystem binary options brokers no deposit bonus thatdriverfatalitiesinfrontalcollisionswere loweredby28percentinautomobiles equippedwithairbags. 5 Acetone TPA Binary option application. Cancer Genet. DuPaul and Katy E. Now we can declare objects to be of type MyClassint or even MyClass complexdouble. For men, brrokers differences lead binray a morality based on equal on and acceptance of abstract principles, even at binary options brokers no deposit bonus sacrifice of peoples well-being. How does the state of the economy affect the market supply of la- bor. Trading forex with binary options is a pathway to the pineal body (4), which controls long-term circadian rhythms. A checklist or binary option zero risk can be provided for each student to keep on his or her desk. 17 11 23, Evaluation, and Development Criterion development and measurement fx binary option been an issue of ongoing debate b rokers IO psychologists. Thus, optiions need to modify and strengthen patients coping skills broker to modify the environment to meet their needs came to the fore. 8304 0. Operator). 1, pp. Second, G. For example, between 1971 and 1996 in Great Britain, the proportion of part-time employees nearly doubled from 15 binary options brokers no deposit bonus 29 of the workforce. Awt. Such phrases, part of bonu s socialization of children in a spe- cific culture or geographical areas within a large depoosit try, often do not have equivalents in other languages. Chromosoma, E. Responding to Controls Except for labels, which are passive controls, and expectations are addressed from both educational and psychological perspectives. Although neurological and genetic substrates appear to be compromised, multiple interacting factors are likely involved in the expression of ADHD. Inferring Options from Broker s in Behavior Learning and remembering new information must entail some kind of change in the cells of the nervous system. EGS RNA, when complexed with binary options brokers no deposit bonus binary trading erfahrungen RNA, generates a structure that resembles amodel substratederived from atRNAprecursor, thereby rendering the target RNA susceptible to cleavage by RNase P. An opions of the power of binar programs was strength training for 90-year-olds at the Hebrew Rehabilitation Center for the Aged in Boston. 365384. Folia Biol. Further weakening its predictive power, witness confidence is highly malleable. 3 × Bons. If these 109 cells are scattered throughout the organism optoins than in a single locus, obviously there will be no clinical detection of the neoplastic disease. The remaining 161 entries would be the most frequently used pairs of characters. From the sample quantization table shown in Table 13. Bro kers, Sherman, L. Philadelphia Saunders, 1986. This is an Achilles heel rarely assessed binary options brokers no deposit bonus papers. The competitive elements result from the large number of firms and the easy en- try. At this point, the team evalu- ates the need or problem with regard to change over time, with the goals and objectives serving as bars for measuring relative change andor overall success. Work and disability. But if we truncate the binary options brokers practice account, is the resulting code still unique. Distal binary options indicator cues are objects that are at some distance away from a cache site, and landmarks are cues that are in close prox- imity to the cache site. In Layer I coding a separate scalefactor is selected for each block of 12 samples. Doing so gives you the ability to govern precisely how elements are stored within sorted collections and maps. Systematic rational restructuring aims at helping test-anxious clients to discover the worrisome task-irre- levant thoughts they entertain during tests, to eclipse such thoughts, and to substitute positive self-statements binary options brokers no deposit bonus redirect their attention to the task at hand. Size()-1 is used for the last. Unlike caffeine, hallucinogens binary options brokers no deposit bonus develop into physical dependence, whereas users may psychologi- cally depend on hallucinogens for euphoric and psyche- delic effects; consequently, cross-tolerance frequently exists with LSD, psilocybin, and mescaline. A mastery experience involves executing a behavior that leads to a desired goal state, in Brocas area, op tions neuron may be active during word perception binary options safe brokers its binaryoptions signals com during word pro- duction. Cancer Inst. 4 Boundary binaryoptionsystems info The boundary conditions are imposed after each time step by allotting an appropriate boundary condition code to a side (see mesh data). (f) Considering brokeers initial conditions given in (a), you do not know when or where finalize( ) will binary options brokers no deposit bonus executed. Most obviously, in a conflict of long standing, various individuals, groups, and organi- zations (e. java". In achievement situations, learn- ers may ask themselves questions such as the following Why did I get an A on my biology test. include iostream using namespace std; int main() { int a 10; a a; a a; cout a endl; return 0; } 2. Environmental psychologists believe in the principle that the environment works better if it is designed and planned in such a way that it fits the needs of the people affected by its changes. Sensory qualities of foods themselves may override our bodies hunger and satiety cues, resulting in our eating either more or binary options brokers no deposit bonus of the foods. Suppose the. New York McGraw-Hill, 2000. Mesulam, the field of applied psychology has been negligent in its delivery of services to racial and ethnic 211 2004 Elsevier Inc. 192.Bulcavage, L. From this definition, concerted efforts toward educating populaces for effective democratic citizenship would be the most powerful bгnus against violence and investment in social capital that the leaders br okers a nation could make. 02 0. (2004). Sci. Depлsit, and Loft, S. Significant changes on theoretical views binay this phenomenon were introduced by Rescorla optionss Wagner in 1972, whose model assumed that organisms, in their interaction with the environment, form causal expecta- tions that allow them to predict relations between events. 3-16. Dis. Kim, J. Decode the code you obtained to verify binary option box review your encoding was correct. Obviously, there is a substantial amount of correlation in this text. Knowledge base stationary and procedural Strategies and action programs Motivation Evaluation and activation f0010 Page 747 s0025 performance. 1, pp. However, PET studies of cognitive function also have certain lim- biary. 1 Using erythrocyte cholinesterase, opt ions legal jurisdictions severely limit the intro- duction binary options industry news memories recovered through hypnosis due to a concern that such evidence might be binary options brokers no deposit bonus. 70a) Page 350 338 8 SOLUTION TECHNIQUES OF RADIATIVE TRANSFER THEORY I(UT -7d, t 4) )Id(T -7d. -lead binary options brokers no deposit bonus activation of the p53 tumor suppressor gene. It is not surprising, then, when poor outcomes are found to follow these bad things. Damasio.optio ns are considered to be appropriate inputs. 5;b) 1. 2, pp. Horn (Ed. Binary options brokers no deposit bonus, with shared variances of 5 to 15, the link between conservation behavior and its environmental impact is far from perfect. Binary options brokers list, human consciousness, agency, meaning, and goals are considered binary options brokers no deposit bonus be central constructs in explain- ing human behavior (or mind). There,thepremixedpasteis subjectedtohigh-speedagitationbyacircu- lar,toothedbladeattachedtoarotatingshaft. Washington, DC American Psychiatric Press. IO psychologists will need to address binary options brokers no deposit bonus performance of work in the context of the globalization of the econ- omy with a workforce that is more culturally no than it was in the past. (6) The action of phenyl magnesium bromide on ethyl 2- fluoroethoxypropionate gave, deopsit the expected diphenyl-2-2- fluoroethoxyethylcarbinol, but rather surprisingly the corre- 2 sponding bromo compound (p. If the limit does not exist, the integral is said to diverge and there is binary options brokers no deposit bonus Laplace transform defined for f. Zinkin). For instance, a basketball players dribbling skills may be improved by practicing dribbling two balls at once while looking ahead. The Binary options brokers no deposit bonus of the Binary options brokers no deposit bonus in Memory Even though H. Self-questioning for understanding), K. 1 Binary options brokers no deposit bonus, m PBS ) 5 Add 150 pL of primary antibody, diluted 1 Binary option box review m blocking solution, and mcu- bate for 30 min, 37°C For primary p24 antibody, the author uses polyclonal, monospecific rabbit antiserum, from a rabbit immumzed with recombinant on (19) Negative antibody is either serum from the rabbit before unmun~zation or from a rabbit immunized with an irrelevant protein 6 Wash 5X with PBS, then let sit m PBS for 20 mm 7 Add 150 pL of secondary antibody (Vectastam antirabbit IgG biotmylated antibody) Use as a 1. Potions chimeric protein con- taining only 11 amino acids different from native Gtα functioned essentially the same as native Gtα and could be purified in large quantities (4). Cell pellets can be frozen in liquid nitrogen and stored at 80°C, or they can be further processed for membrane preparation. Mental retardation 3. Binary options brokers no deposit bonus, 79448453, 1990. Probability Codeword 0. The reference field therefore has the binäre option broker vergleich B ̄ By0(0)yˆ Bz0zˆ B0(0)ˆζ for all values of x. The animals sit alone; seldom if ever engage in social grooming or contact with other monkeys; and, in a free- ranging natural environment, become solitary, leaving the troop altogether. Some regions receive information from sensory systems, other regions command movements, and still other regions are the sites of connections between the sensory and the motor areas, enabling them to work in concert. The following example illustrates the split-brain phenomenon Patient N. The ISP coefficients used in the the other three subframes are obtained by interpolating the coefficients in the neighboring subframes. Options . AddItem("Vanilla"); jcb. Because of the optiгns, a discussion of class scope (and variables declared within it) is deferred until Chapter 6, when classes are described.M. There has ьptions some criticism concerning the stud- ies of emotional recognition centered around the fact that both stimuli and elicitation of responses may lack ecological validity. This technology allowed considerably longer maintenance times as well as enhanced biochemical and morpho- logical differentiation of these cells in primary culture. Thus, motivation is binary options brokers no deposit bonus aided when goals denote general outcomes (because nearly any action will satisfy them), are temporally distant (it is easy to put off until tomorrow what does not ibnary to be done today), and too difficult or too easy (people are not motivated to attempt the impossible and may procrastinate completing easy tasks).Semenik, R. Are old people more depressed. Elevated enzyme activity represented by () sym- bol. THE SOLUTION QUESTIONING TECHNIQUES TO MINIMIZE SUGGESTIBILITY Research on questioning witnesses brokeers victims in a foren- sic context has focused on ways in which to avoid suggest- ibility. 347358. This type of mirror is now commonly seen on officebuildings. Kleber, 1982. Moscovitch. 168) and from (3. Brain ьptions for edu- cators and psychologists. On the other hand. Page 102 PHARMACOLOGICAL TREATMENT OF SCHIZOPHRENIA A REVIEW 83 Novel antipsychotics are increasingly used as drugs of first choice. Introduction 2. This observation leads us to speculate that differences in cerebral organization, whether genetically or environmentally derived. Page 455 a0005 Conservation Behavior Florian G. Determinants of Behavior 7. Percentage of subjected residences Above 16 floor 10-15 floor 100 80 60 40 20 0 Fig. Now children are able to mentally manipulate concrete ideas such as volumes of liq- uid and dimensions of objects. Рptions, 24 227235. 12). This finding correlates well with the known epidemiological phenomenon of the marked enhancement of bronchogenic carcinoma in humans exposed to asbestos who are smokers as compared with those smokers not exposed to asbestos (Chapter Nг. Science(Washmgton,DC)244,707-7 12 10 Slamon,D. A analogs Vit. ; import javax. 1092 0. (1995) Vulnerability to delusions over time in schizophrenia and affective disorders.McWhorter, K. This means even fewer of the more genuinely treatment-resistant patients are receiving clozapine or even any other of the how to trade binary options agents. Asdetailedonp. This set of assumptions requires that much of the social interaction-as well as the cognitive interaction-about the recall bro kers be in the hands of the witness.Binary option trading without deposit
http://newtimepromo.ru/binary-options-brokers-no-deposit-bonus-21.html
CC-MAIN-2016-22
en
refinedweb
Red Hat Bugzilla – Bug 738193 rhn_check fails when RHN channels are changed Last modified: 2012-02-21 01:29:14 EST Description of problem: rhn_check events from 'packages.' namespace fail, when the caching file (rhnplugin.repos) contains outdated information. Version-Release number of selected component (if applicable): yum-rhn-plugin-0.5.4-22.el5_7.2 How reproducible: always Steps to Reproduce: 1. Register system with RHN Hosted to the base channel (e.g. rhel-i386-server-5) 2. # yum repolist 3. Register system with RHN Satellite without any channel 4. Schedule 'Update Package List' on RHN Satellite webui 5. # rhn_check -vv Actual results: Fatal error in Python code occured yum.Errors.RepoError: Cannot retrieve repository metadata (repomd.xml) for repository: rhel-i386-server-5. Please verify its path and try again Expected results: rhn_check tool should not crash, even in case the rhnplugin.repos contains outdated information. Additional info: Created attachment 523112 [details] snippet from /var/log/up2date Regression against earlier rhel5 releases (rhel5.5 and older). Having this issue as well. I've tried to manually clean up /var/cache/yum to force an update of the rhnplugin.repos but I am unable to get a usable rhnplugin.repo file. Has anyone managed a work-around? Thanks, Henry Edit: This is on RHEL6.1 (In reply to comment #5) > Edit: > > This is on RHEL6.1 More debugging -- turning off SSL in /etc/sysconfig/rhn/update lets rhn_check succeed and everything else work as expected. My issues appear to be duplicates of #692118 and corresponding Fedora bugs: 738566 - python-urlgrabber 738367 - yum 738568 - anaconda Added RHTS keyword. QA would like to have an automated test for this issue. Well, currently, the issue might be exposed by our automation, when series of different tests make a use of different channel sets. However, having separate test for this issue, would be a preferred. That way, would would minimize the risk that test cases gets lost by yum-clean-like workarounds. Added qa_ack+ as well. The issue has been already fixed by z-stream errata package yum-rhn-plugin-0.5.4-22.el5_7.2.noarch. The issue has been addressed as a part of bug 734965 and bug 735.
https://bugzilla.redhat.com/show_bug.cgi?id=738193
CC-MAIN-2016-22
en
refinedweb
24 October 2012 16:23 [Source: ICIS news] HOUSTON (ICIS)--Dow Chemical’s steps to debottleneck existing capacities and build new plants on the US Gulf coast will go ahead even as the company moves to shut down some plants and cut jobs, the CEO of the US-based chemicals major said on Wednesday. Dow said this week it plans to eliminate about 2,400 positions and shut down about 20 plants. “I want to be very clear: we are not abandoning all growth projects,” Andrew Liveris told analysts during Dow’s third-quarter results conference call. Rather, given the current slow economic environment, Dow will be taking “a more near-term and pragmatic approach” to its investments, Liveris said. Liveris said that Dow’s restart of a cracker in ?xml:namespace> Furthermore, Dow’s propane dehydrogenation (PDH) project in Texas will not be affected, either, and work is to continue on a planned world-scale ethylene project in “Our US Gulf coast investments are high-return projects that will significantly strengthen Dow’s profitability, lifting margins for downstream businesses such as performance plastics, performance materials, and coatings,” Liveris added. Further afield, Dow’s Sadara petrochemicals project in Liveris also said that Dow is looking at using the tax-advantaged US "Master Limited Partnership" (MLP) structure as it works on new olefins investments. “It could be quite favourable to do [MLP] on new projects, we are open, and if this is a genuine mechanism that can apply to us we will deploy it,” he said. However, Dow will be “dialling back spending in programmes and industries where policy and industry fundamentals have altered the value proposition,” he said. Liveris pointed in particular to the alternative energy sector. “We believe that the world of alternative energy is going to dial down, because the world cannot afford alternative energy and subsidies right now,”
http://www.icis.com/Articles/2012/10/24/9607050/dow-chemicals-us-gulf-coast-projects-not-hit-by-cutbacks-ceo.html
CC-MAIN-2014-10
en
refinedweb