| text | url | dump | lang | source |
|---|---|---|---|---|
XmTextFieldClearSelection - A TextField function that clears the primary selection
#include <Xm/TextF.h>
void XmTextFieldClearSelection (widget, time)
Widget widget;
Time time;
XmTextFieldClearSelection clears
the primary selection in the TextField widget.
widget - Specifies the TextField widget ID.
time - Specifies the time at which the selection value is desired. This should be the time of the event which triggered this request.
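For illustration, a minimal sketch of calling the function from a callback (the callback and widget names are hypothetical; the selection is cleared using the timestamp of the triggering event, as described above):
#include <Xm/Xm.h>
#include <Xm/TextF.h>
/* Hypothetical callback: clears the TextField's primary selection,
 * using the time of the event that triggered the request.
 * Assumes the TextField widget is passed as client_data and that the
 * triggering event is a button event. */
static void clear_selection_cb(Widget w, XtPointer client_data, XtPointer call_data)
{
    Widget text_field = (Widget) client_data;
    XmAnyCallbackStruct *cbs = (XmAnyCallbackStruct *) call_data;
    XmTextFieldClearSelection(text_field, cbs->event->xbutton.time);
}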
For a complete definition of TextField and its associated resources,
see
XmTextField(3X).
XmTextField(3X)
|
http://backdrift.org/man/tru64/man3/XmTextFieldClearSelection.3X.html
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
#include <IndicesFromValues.h>
This class returns the indices given a list of values.
Global values, in which the input values are searched.
Output indices of the given values, searched in global.
Output indices of the other values, (NOT the given ones) searched in global.
If set to true, the output contains the indices of the "global" entries that match one of the input values.
|
https://www.sofa-framework.org/api/master/sofa/html/classsofa_1_1component_1_1engine_1_1_indices_from_values.html
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
Connect to SAP Hybris C4C data from Python on Linux/UNIX. Using the CData ODBC Driver for SAP Hybris C4C together with the pyodbc module, Python applications on Linux/UNIX can easily work with live SAP Hybris C4C data.
List the Defined Data Source(s)
$ odbcinst -q -s
CData SAPHybrisC4C Source
...
To use the CData ODBC Driver for SAP Hybris C4C with unixODBC, ensure that the driver is configured to use UTF-16. To do so, edit the INI file for the driver (cdata.odbc.saphybrisc4c.ini), which can be found in the lib folder in the installation location (typically /opt/cdata/cdata-odbc-driver-for-saphybrisc4c), and set the encoding option there to UTF-16.
SAP Hybris Cloud for Customer uses basic authentication. Set the User and Password to your login credentials.
/etc/odbc.ini or $HOME/.odbc.ini
[CData SAPHybrisC4C Source]
Driver = CData ODBC Driver for SAP Hybris C4C
Description = My Description
User = user
Password = password
For specific information on using these configuration files, please refer to the help documentation (installed and found online).
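After the DSN is defined, you can optionally sanity-check it from the shell. This is a sketch assuming unixODBC's isql utility is installed; it is not required for the Python steps below:
$ isql -v "CData SAPHybrisC4C Source"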
You can follow the procedure below to install pyodbc and start accessing SAP Hybris C4C through Python objects.
Install pyodbc
You can use the pip utility to install the module:
pip install pyodbc
Be sure to import the module with the following:
import pyodbc
Connect to SAP Hybris C4C Data in Python
You can now connect with an ODBC connection string or a DSN. Below is the syntax for a connection string:
cnxn = pyodbc.connect('DRIVER={CData ODBC Driver for SAP Hybris C4C};User=user;Password=password;')
Below is the syntax for a DSN:
cnxn = pyodbc.connect('DSN=CData SAPHybrisC4C Source;')
Execute SQL to SAP Hybris C4C
cnxn = pyodbc.connect('DSN=CData SAPHybrisC4C Source;User=MyUser;Password=MyPassword')
cursor = cnxn.cursor()
cursor.execute("SELECT ObjectID, AccountName FROM AccountCollection WHERE AccountName = 'MyAccount'")
rows = cursor.fetchall()
for row in rows:
    print(row.ObjectID, row.AccountName)
You can provide parameterized queries in a sequence or in the argument list:
cursor.execute( "SELECT ObjectID, AccountName FROM AccountCollection WHERE AccountName = ?", 'MyAccount',1)
Insert
INSERT commands also use the execute method; however, you must subsequently call the commit method after an insert or you will lose your changes:
cursor.execute("INSERT INTO AccountCollection (AccountName) VALUES ('MyAccount')") cnxn.commit()
Update and Delete
As with an insert, you must also call commit after calling execute for an update or delete:
cursor.execute("UPDATE AccountCollection SET AccountName = 'MyAccount'") Hybris C4C data, using the CData ODBC Driver for SAP Hybris C4C.
|
http://www.cdata.com/jp/kb/tech/saphybris-odbc-python-linux.rst
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
Create Basic Routing Integrations
You create an integration that provides a template with empty trigger and invoke connections in which to add your own adapters. You can also create a single routing expression and request and response enrichments, as needed. You cannot create multiple routing expressions. If your integration requires this feature, create an orchestrated integration.
Topics:
Create a Basic Routing Integration
Note: The Basic Routing integration style has been deprecated. Oracle recommends that you use the App Driven Orchestration integration style, which provides more flexibility.
Add a Trigger (Source) Connection
Add an Invoke (Target) Connection
Add Request and Response Enrichments
Delete Request and Response Enrichments
Create Routing Paths for Two Different Invoke Endpoints in Integrations
Create Routing Expression Logic in Both Expression Mode and Condition Mode
Create a Basic Routing Integration
This section describes how to create a basic routing integration.
- Follow the steps in Create Integrations to create a basic routing integration.
An integration canvas with empty trigger and invoke connections is displayed.
Add a Trigger (Source) Connection
The trigger (source) connection sends requests to Oracle Integration. The information required to connect to the application is already defined in the connection. However, you still must specify certain information, such as the business object and operation to use for the request and how to process the incoming data.
- In the Integration Designer, drag a connection from the Connections or Technologies panel on the right to the Trigger (source) area on the canvas.
The Adapter Endpoint Configuration Wizard for the selected connection is displayed. The pages in the wizard that appear are based on the adapter you selected. See Understand Trigger and Invoke Connections.
Add an Invoke (Target) Connection
Oracle Integration sends requests or information to the invoke (target) connection. The information required to connect to the application is already defined in the connection. However, you still must specify certain information, such as the business object and operation to use for the request and how to process the data.
- In the Integration Designer, drag a connection from the Connections or Technologies panel on the right to the Invoke (target) area on the canvas.
The Adapter Endpoint Configuration Wizard for the selected connection is displayed. The pages in the wizard that appear are based on the adapter you selected. See Understand Trigger and Invoke Connections.
- After you configure the connection, the Summary page appears.
- Click Done, then click Save.
Add Request and Response Enrichments
When you create an integration, you also have the option of adding both request and response message enrichment points to the overall integration flow. Enrichments participate in the overall integration flow and can be used in the request and/or response payloads between the trigger and invoke.
- Design an integration with trigger and invoke connections and request and response mappings. For this example, the integration looks as follows when complete. Note the two enrichment point circles in the design; one appears on the inbound (request) side and the other appears on the outbound (response) side. The request and response mappings for this example are as follows:
- From the Connections panel on the right, drag an adapter to the enrichment area on the response message shown below. For this example, a SOAP Adapter is dragged to the Drag and drop an enrichment source for the response message area. This action invokes the wizard for configuring the SOAP Adapter.
- Complete the pages of the wizard to configure the SOAP Adapter, then click Done. For this configuration, a different operation for selecting timestamp details is chosen. You are prompted with a dialog to delete any impacted response mappings that you previously configured for the response mapper. The response mapper requires updates because of the enrichment response adapter configuration you just performed.
- Click Yes. You recreate the response mappings later in these steps.
- Click Save. A SOAP Adapter icon and response enrichment mapper are added to the response side of the integration. Note that because you deleted the response mappings in the previous step, that icon is no longer shaded in green. This indicates that the response mapper requires configuration.
- Click the Response Enrichment Mapping icon between the trigger and invoke.
- Click the Create icon that is displayed. This invokes the mapper.
- Map source elements to target elements to include a timestamp with the response, then click Save when complete. The response enrichment mappings are as follows:
- Click the Response Mapping icon to invoke the mapper again. This mapper requires updates because of the enrichment response mapping you performed.
- Remap the source elements to target elements in the response mapper. The response mappings are updated. Note that a different source is now mapped to the original target of HelloResponse/Greeting.
- Click Close, then click Apply when complete. The integration with response enrichments added to the invoke (target) area looks as follows:
- Click Save, then click Close when complete. You are ready to activate the integration. While not demonstrated in this example, you can also configure the enrichment area on the request message shown below by dragging and dropping an adapter to the Drag and drop an enrichment source for the request message area. This invokes the adapter configuration wizard.
Delete Request and Response Enrichments
You can delete the request and response message enrichment point mappings added to an integration. After deleting the enrichment point mappings, the integration is returned to its original pre-enrichment state.
- On the Integration page, select the integration. The integration must not be active.
- Click the enrichment area on the request message or response message to delete.
- Select the Delete icon that is displayed. This deletes the mappings.
- Click Yes when prompted to confirm.
- Click Save, then click Close.
Create Routing Paths for Two Different Invoke Endpoints in Integrations
You can create an integration in which you define routing paths for two different invoke endpoints. During runtime, the expression filtering logic for the routing paths is evaluated and, based on the results, the path to one of the invoke endpoints is taken. If the filtering logic for neither routing path is satisfied, then neither invoke endpoint is contacted.
You define an expression filter on the first (upper) invoke endpoint.
You define either an ELSE condition or an expression filter on the second (lower) invoke endpoint.
During runtime, if the expression filtering logic for the first (upper) invoke endpoint evaluates to true, then the path to that invoke endpoint is taken. If the expression evaluates to false, then that invoke endpoint is skipped, and the path to the second (lower) invoke endpoint is taken through either an ELSE condition or an expression filter.
In addition to creating routing paths, you also define request and response (and optionally, enrichment) mappings on both invoke endpoints.
To create routing paths for two different invoke endpoints in integrations:
- On the Integrations page, select the integration in which to define a routing filter. Ensure that the integration is fully defined with trigger and invoke connections, business identifier tracking, and mappings.
- Click the Filter icon on the trigger side of the integration to create a filtering expression. Routing is created after any defined request enrichment and before the initial request mapping.
- Click the Routing icon in the menu that is displayed. The Expression Builder is displayed for building routing expressions. The Expression Builder supports multiple source structures. You can create OR expressions using both source structures. You can also name expressions and calculate expression summaries with the Expression Summary icon. Elements and attributes with and without namespace prefixes are also supported.
You can filter the display of source structures by clicking the Filter link. This enables you to filter on whether or not fields are used and on the type of field (required fields, custom fields, or all fields). You can also select to filter both required and custom fields together.
- Drag an element from the Source area to the Expression field.
- Define a value. For this example, the ClassificationCode element is defined as equal to Org. This means that Org is retrieved when this expression evaluates to true.
- If you want to calculate the expression, click the Expression Summary icon. This shows the summary of the expression and defines a more user-friendly, readable version of the expression you just created.
- If that name is not sufficiently user-friendly, copy and paste the expression to the Expression Name field for additional editing.
- Click Close to save your changes. The defined expression is displayed above the integration. The Filter icon has now changed to indicate that an expression is defined.
- On the right side of the integration, click the Routing Drawer icon to display a graphical routing diagram with two potential paths. The first route that you just defined (the upper trigger and invoke) shows the defined expression above the line. The second route (the lower trigger and invoke) is displayed as a dotted line because it is not yet defined. You can activate the integration now if additional filtering is not required or define an additional routing filter. For this example, a second route is defined.
- Click the bull’s eye icon in the lower trigger icon to define routing on the second trigger and invoke route.
This refreshes the integration to display the lower trigger and invoke route in the integration. The trigger side remains as defined for the first route, but the invoke route is undefined.
- Click Show Palette to display the list of available connections and technologies.
- Drag an adapter to the invoke (target) area of the integration (for this example, an Oracle RightNow adapter is added). The Adapter Configuration Wizard is invoked.
- Configure the pages of the wizard for the Oracle RightNow adapter. For this example, the Get operation and Account business object are selected on the Operations page.
The integration is now defined for the second invoke. You now need to create a filtering expression for the second invoke.
- Click the Filter icon to create a filtering expression.
- If no additional expression is required, click the E icon (to create an ELSE condition).
This defines an ELSE condition for the second trigger and invoke. The ELSE condition is taken if the first route evaluates to false (that is, ClassificationCode does not equal Org). You can toggle back and forth between the two trigger routes by clicking the adapter icon on the individual line. The line in blue is the currently visible invoke in the integration.
- If you want to define your own expression filter for the second route instead of using the ELSE condition, perform the following steps:
- Click the Filter icon.
- Select Clear Expression to remove the ELSE condition.
- Click Yes when prompted to confirm.
- Click the Filter icon again and select the Edit icon to invoke the Expression Builder as you did in Step 3.
- Define an expression.
- Click Close to save your changes. Request and response mappings must now be defined.
- Click the Request Mapper icon to define the mapping. For this example, the following mapping is defined.
- Click the Response Mapper icon to define the mapping. For this example, the following mapping is defined.
Integration design is now 100% complete.
- Activate the integration.
Create Routing Expression Logic in Both Expression Mode and Condition Mode
You can create XPath expressions for routing conditions in two different user interface modes:
Expression mode: This mode provides an interface for creating and viewing the entire XPath expression.
Condition mode: This mode provides an easier-to-read interface to create and view XPath condition expressions. This mode is useful for business analysts who may be less experienced with XPath expressions.
Three levels of elements are loaded by default in the tree in the Source area. When you reach the third level, a Load more link is displayed. Click this link to display all the direct children of that element. Only base types are loaded automatically. To load the extended types of the base type, click the base type, which is identified by a unique icon. This invokes a menu of extended types that you can select to load one by one into the tree.
Elements in the tree in the Source area that you have already dragged to an expression are identified by green checkboxes. These elements are displayed even if they are deeper than three levels in the tree.
You can search for an element that is not yet loaded in the tree by entering the name in the Find field and clicking the Search icon. This action loads that specific element into the tree.
This section provides an example of building an expression using both modes.
To create routing expressions in both expression mode and condition mode:
- Click the Filter icon on the source side of an integration to create a filtering expression.
- Click the Routing icon in the menu that is displayed. The Expression Builder is displayed for building routing expressions. Expression mode is the default mode.
- In the field immediately below Expression Name, optionally enter a short description about the expression you want to build.
- Add an element from the Source area on the left side to the expression field immediately below the short description field. If needed, you can also add functions from the Components section. There are two ways to add an element to the expression field:
- Drag the element from the Source area.
- Select the row of the element in the Source area, then click the Move icon in the middle of the page to move the element. The expression for the selected element is displayed in the expression field (for this example, the expression for the Country element was added). The selected element is identified by a green checkbox in the Source area.
- To the right of the added expression, define an operator and a value within single or double quotes (for this example, = “USA” is defined).
- Click the Expression Summary icon to view a simplified, user-friendly version of the expression. Easy-to-read output is displayed.
Note:
To add additional elements to the expression, you can place your cursor in the exact location of the expression, select the row of an element in the Source area, and click the Move icon. These actions add that element to the exact location of your cursor.
You can drag an element to the exact location of your cursor in the expression, and the expression of the element is added to the cursor location, and not the location in which you drop the element.
You can drag an element on top of an existing expression element to replace it.
- In the upper right corner, click Condition Mode to view the expression you created in condition mode. Condition mode provides an easy-to-read interface for creating and viewing your expressions.
Note the following details about accessing condition mode:
Condition mode can only be accessed if the expression field is empty or completely defined with an expression that returns true or false. If you only partially define an expression (for example, you drag an element to the expression field, but forget to define expression logic and a value such as = “USA”), you receive an error saying that you must provide a valid condition to access condition mode.
The Condition Mode button toggles to Expression Mode.
Note: At any time, you can click Expression Mode to view the entire XPath expression.
- Click the expression.
This refreshes the page to display icons for adding additional conditions and conditions groups. Groups enable you to combine multiple conditions into a single logical expression.
- Click the Add Condition icon (first icon) to add additional condition expressions. This creates an additional field for entering additional expression logic. The message Drag and drop or type here is displayed in this field.
- Drag an element from the Source area to the first Drag and drop or type here field (for this example, the Country element is again added).
- Select an operator (for example, =, >, !=, and so on) and enter a value (for this example, “Mexico” is added).
- From the Match list, select an option. This list is hidden until at least two conditions are defined.
Any of: Select if any of the added expressions must be true. This equates to an OR condition in the entire XPath expression shown in expression mode.
All of: Select if all expressions must be true. This equates to an AND condition in the entire XPath expression shown in expression mode.
- Select the Add Group icon (second icon) to group a series of conditions. This option enables you to build a number of conditions within a single group. The group is identified by the gray outline and the indentation.
- Add an element from the Source area. For this example:
The DisplayName element is added to the first Drag and drop or type here field.
The not equal operator (!=) is selected.
The Country element is added to the second Drag and drop or type here field.
- Click the Add Condition icon (first icon) to add an additional condition expression within the group. For this example:
The DisplayOrder element is added to the first Drag and drop or type here field.
The less than operator (<) is selected.
A value of 10 is entered in the second Drag and drop or type here field.
- Continue building your group condition, as necessary. When complete, the expression is displayed. For this example, the conditions are: if Country is USA OR Country is Mexico OR (DisplayName does not equal Country AND DisplayOrder is less than 10), the integration continues. (A sketch of the equivalent XPath appears after these steps.)
- Click Expression Mode. Note the entire XPath expression and the expression summary at the bottom. The selected elements are displayed (no matter their level of depth in the tree) and identified by green checkboxes in the Source area.
- If you want, you can place your cursor in the XPath expression and edit it as necessary (for example, change USA to Canada), then click the Expression Summary icon to refresh the calculation. If you make an error when editing the XPath expression (for example, forget to add a double quote to a value), an error message is displayed.
- Click Save to view the expression in read-only mode. You can also click Done Editing at any time during the creation process to view the expression in read-only mode.
- Click Close to return to the integration. The user-friendly expression is displayed in the blue banner above the integration.
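For reference, a sketch of roughly what the complete XPath expression from the example above might look like in expression mode (the element names are illustrative; the real paths depend on your source structure):
(Country = "USA") or (Country = "Mexico") or ((DisplayName != Country) and (DisplayOrder < 10))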
Delete Routing Paths
You can delete routing paths that have been created on different target endpoints in an integration.
Delete the routing path and expression filter.
Delete the endpoint and routing path, but retain the expression filter.
Deleting the Routing Path and Expression Filter
To delete the routing path and expression filter:
In the Integrations page, select the integration in which to delete a routing path.
Expand the Routing Drawer icon to display the diagram of routing paths.
Above the integration, select the routing path to delete.
Click the Filter icon.
Select Delete Route from the menu that is displayed.
Click Yes when prompted to confirm.
This action deletes the routing path, including the expression filter and the request mapping for the selected path. The diagram above the integration shows that the routing path is deleted.
Deleting the Endpoint and Routing Path
To delete the endpoint and routing path:
In the integration, click the target endpoint to delete.
Click Delete in the menu that is displayed.
Click Yes when prompted to confirm.
This action deletes the target endpoint and routing path. The diagram above the integration shows that the routing path is deleted. Within the integration, only the expression remains defined in the integration because it is not using anything from the deleted target endpoint.
|
https://docs.oracle.com/en/cloud/paas/integration-cloud/integrations-user/creating-map-data-integrations.html
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
Build a Simple Netty Application With and Without Spring
In this article, we discuss how to build a Netty application with and without Spring to better understand just how much Spring abstracts for us.
As an asynchronous, non-blocking input/output (NIO) framework, Netty is used for the rapid development of maintainable, highly scalable protocol servers and clients. Building low-level network servers and clients is relatively straightforward with Netty. Developers can work on the socket level, for example creating original communication protocols between clients and servers.
Netty supports unified blocking and non-blocking APIs, a flexible threading model, and SSL/TLS. With a non-blocking server, all requests run asynchronously on a shared event loop, which must not be blocked by request-handling code. This contrasts with a blocking server model, which usually uses a separate thread to run each request. Because it does not need to create or switch threads as load increases, the non-blocking model reduces overhead and scales better as traffic grows.
All of this power, however, comes at the cost of complexity. Non-blocking code is typically harder to read, to test, and to maintain, although this has improved greatly as the asynchronous paradigm has matured. Since Netty works at the socket level, it also requires a deeper understanding of the nuts and bolts of things like thread loops, byte buffers, and memory management.
The Netty.io team has done an admirable job of making Netty easy to use for all its power, but it’s still necessarily more complicated than higher-level libraries (such as Spring Boot WebFlux). So why use it?
Netty is designed to make the implementation of custom network protocols relatively easy. HTTP is great, but it's a general-purpose protocol, reasonably well-suited to most things. But if you're consistently passing custom, structured data back and forth between servers and clients (large files, streaming media, real-time game data, etc.), you can do better. Netty allows you to write your own network protocol tailored to your specific requirements, optimizing the traffic flow for your specific situation, without the unnecessary overhead of something like HTTP or FTP.
However, even if you’re not going to write your own custom TCP protocol, you can still use the power of Netty. Spring WebFlux is Spring’s answer to non-blocking and reactive programming. It’s an alternative to the traditional (blocking) Spring MVC architecture. By default, the Spring Boot WebFlux Starter runs on an embedded Netty server. In this configuration, you can think of WebFlux as a reactive, non-blocking HTTP application layer built on top of Netty’s NIO socket goodness.
In this tutorial, you are going to create a basic “Hello world” application in Netty. Next, you’re going to create the same “Hello world” application in Spring Boot WebFlux. Finally, you’re going to add OAuth 2.0 login to the application using Okta as the OAuth 2.0 provider.
Install the Project Dependencies
This project has a few required tools to install before you get started.
Java 11: This project uses Java 11. You can install OpenJDK via the instructions found on the OpenJDK website or using SDKMAN.
HTTPie: This is a simple command-line utility for making HTTP requests that you’ll use to test the REST application. It’s also beloved by Okta developers. Install per the instructions on their website.
Okta Developer Account: You’ll use Okta as an OAuth/OIDC provider to add OAuth2 login authentication to the application. Sign up for a free Okta developer account, if you haven’t already.
You should also go ahead and clone this blog’s GitHub repository.
git clone https://github.com/oktadeveloper/okta-netty-webflux-example.git
The project contains three subdirectories, corresponding to the three sections of this tutorial:
netty-hello-world: a very basic example of how to create a Netty server
webflux-hello-world: how to create the same server in Spring WebFlux
webflux-oauth2login: an example of how to add OAuth2 login to a Spring WebFlux application
Use Netty to Build an HTTP Server
HTTP servers are application-layer implementations of the HTTP protocol (OSI Layer 7), so relatively high up in the internet stack. If you’re developing a REST API, you’re developing on top of an API that provides this implementation for you. By contrast, Netty doesn’t necessarily structure communication, provide session management, or even offer security like TLS. This is great if you’re building a super low-level networking application; however, perhaps not the best choice if you’re building a REST service.
Fortunately, the Netty API also provides some helper classes and functions that will allow us to easily integrate a higher level protocol like HTTP. In this part of the tutorial, you’ll use those to make a simple HTTP server.
Open the netty-hello-world project in your favorite IDE or text editor.
First, take a look at the src/main/java/com/okta/netty/AppServer.java file. This class is the entry point for the application and sets up the Netty server.
package com.okta.netty;
...
public class AppServer {
private static final int HTTP_PORT = 8080;
public void run() throws Exception {
// Create the multithreaded event loops for the server
EventLoopGroup bossGroup = new NioEventLoopGroup();
EventLoopGroup workerGroup = new NioEventLoopGroup();
try {
// A helper class that simplifies server configuration
ServerBootstrap httpBootstrap = new ServerBootstrap();
// Configure the server
httpBootstrap.group(bossGroup, workerGroup)
.channel(NioServerSocketChannel.class)
.childHandler(new ServerInitializer()) // <-- Our handler created here
.option(ChannelOption.SO_BACKLOG, 128)
.childOption(ChannelOption.SO_KEEPALIVE, true);
// Bind and start to accept incoming connections.
ChannelFuture httpChannel = httpBootstrap.bind(HTTP_PORT).sync();
// Wait until server socket is closed
httpChannel.channel().closeFuture().sync();
}
finally {
workerGroup.shutdownGracefully();
bossGroup.shutdownGracefully();
}
}
public static void main(String[] args) throws Exception {
new AppServer().run();
}
}
The most important line is .childHandler(new ServerInitializer()), which creates ServerInitializer and ServerHandler and hooks into the Netty server.
Next, look at src/main/java/com/okta/netty/ServerInitializer.java. This class configures the Netty channel that will handle our requests and connects it to the ServerHandler.
package com.okta.netty;
...
public class ServerInitializer extends ChannelInitializer<Channel> {
protected void initChannel(Channel ch) {
ChannelPipeline pipeline = ch.pipeline();
pipeline.addLast(new HttpServerCodec());
pipeline.addLast(new HttpObjectAggregator(Integer.MAX_VALUE));
pipeline.addLast(new ServerHandler());
}
}
Finally, there is src/main/java/com/okta/netty/ServerHandler.java. This is where the actual request is mapped, and the response is generated.
package com.okta.netty;
...
public class ServerHandler extends SimpleChannelInboundHandler<FullHttpRequest> {
protected void channelRead0(ChannelHandlerContext ctx, FullHttpRequest msg) {
ByteBuf content = Unpooled.copiedBuffer("Hello World!", CharsetUtil.UTF_8);
FullHttpResponse response = new DefaultFullHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK, content);
response.headers().set(HttpHeaderNames.CONTENT_TYPE, "text/html");
response.headers().set(HttpHeaderNames.CONTENT_LENGTH, content.readableBytes());
ctx.write(response);
ctx.flush();
}
}
In this class, notice that you must convert the response string to a byte buffer. You actually generate an HTTP response and set some headers directly. This is the application layer of the internet (OSI Layer 7). When you call ctx.write(response), it sends the response as a byte stream over TCP. The Netty team has done a great job of hiding a ton of complexity from us while staying at a low-level transport protocol.
Test Your Netty App
To test this Netty app, from the project root directory netty-hello-world, run:
./gradlew run
Once the application has finished loading, from a separate shell, use HTTPie to perform a GET request:
$ http :8080
HTTP/1.1 200 OK
content-length: 12
content-type: text/html
Hello World!
That’s a simple HTTP server built in Netty. Next, you will climb the ladder of abstraction a rung and use Spring Boot and WebFlux to simplify things.
Say Hello to WebFlux on Netty
As I mentioned previously, WebFlux is a non-blocking alternative to Spring MVC. It supports reactive programming with its event-driven, asynchronous, and non-blocking approach to request handling. It also provides many functional APIs. Reactor, a reactive, server-side Java library developed in close collaboration with Spring, provides the reactive streams aspect of WebFlux. However, you could also use other reactive streams libraries.
Recall that, by default, the Spring Boot WebFlux starter runs on a Netty server. You’ll notice how much complexity Spring Boot hides from you in the next example.
The Spring Boot WebFlux project is located in the webflux-hello-world sub-directory of the GitHub repository. It's beguilingly simple.
Take a look at the ReactiveApplication class. It's the bare-bones, standard Spring Boot application class. It simply leverages the public static void main() method and the @SpringBootApplication annotation to start the whole Spring Boot application framework.
src/main/java/com/okta/webflux/app/ReactiveApplication.java
package com.okta.webflux.app;
...
@SpringBootApplication
public class ReactiveApplication {
public static void main(String[] args) {
SpringApplication.run(ReactiveApplication.class, args);
}
}
The ReactiveRouter is a simple router class that links HTTP endpoints with handler methods. You can see that it uses dependency injection to pass the ReactiveHandler to the router bean, which defines a single endpoint for the / route.
src/main/java/com/okta/webflux/app/ReactiveRouter.java
package com.okta.webflux.app;
...
// annotations assumed here: @Configuration registers this class with Spring, @Bean exposes the router function
@Configuration
public class ReactiveRouter {
@Bean
public RouterFunction<ServerResponse> route(ReactiveHandler handler) {
return RouterFunctions
.route(RequestPredicates
.GET("/")
.and(RequestPredicates.accept(MediaType.TEXT_PLAIN)), handler::hello);
}
}
The ReactiveHandler is similarly simple. It defines one handler function that returns plain text. The Mono<ServerResponse> return type is a special type for returning a stream of one element. Take a look at the Spring Docs on Understanding Reactive types to learn more about return types. If you're used to Spring MVC, this will likely be one of the more unfamiliar aspects of WebFlux.
package com.okta.webflux.app;
...
// annotation assumed here: @Component makes the handler injectable into the router above
@Component
public class ReactiveHandler {
public Mono<ServerResponse> hello() {
return ServerResponse
.ok()
.contentType(MediaType.TEXT_PLAIN)
.body(BodyInserters.fromObject("Hello world!"));
}
}
Open a shell and navigate to the webflux-hello-world sub-directory of the project.
Run the project using ./gradlew bootRun.
Open another shell to test the endpoint with http :8080.
HTTP/1.1 200 OK
Content-Length: 12
Content-Type: text/plain
Hello world!
See how much simpler Spring Boot was to use than Netty?
Create an OpenID Connect (OIDC) Application
Next, you will secure the application using OAuth 2.0 login. This might sound complicated, but don’t worry. Spring and Okta have conspired to make it pretty darn simple!
Okta is a SaaS (software-as-a-service) authentication and authorization provider. We provide free accounts to developers so you can develop OIDC apps without fuss. Head over to developer.okta.com and sign up for an account.
After you’ve verified your email, log in and perform the following steps (if it’s your first time to log in, you may need to click the yellow Admin button to get to the developer dashboard):
- Go to Application > Add Application.
- Select application type Web and click Next.
- Give the app a name. I named mine “WebFlux OAuth”.
- Under Login redirect URIs, change the value to your application's redirect URI. The rest of the default values will work.
- Click Done.
Take note of the Client ID and Client Secret at the bottom. You’ll need them in a moment.
Secure Your App with OAuth 2.0
Once you've created the OIDC application on Okta, you need to make a few updates in the project. If you want to skip ahead, the finished project for this part of the tutorial can be found in the webflux-oauth2login sub-directory, but I'm going to show you how to modify webflux-hello-world to add login.
First, add the Okta Spring Boot Starter to the Gradle build file. We've worked hard to make this as easy as possible, and the Okta Spring Boot Starter simplifies OAuth configuration. Take a look at the GitHub project for the starter for more info.
Add the following dependency to the dependencies block of your build.gradle file:
dependencies {
...
implementation 'com.okta.spring:okta-spring-boot-starter:1.3.0'
}
Next, add the following properties to the src/main/resources/application.properties file. You need to replace the values in brackets with your own issuer URI, client ID, and client secret.
You can find your Issuer URI by opening your Okta developer dashboard and going to API > Authorization Servers and looking in the table at the default server. The client ID and secret come from the OIDC application you created just a moment ago.
okta.oauth2.issuer={yourIssuerUri}
okta.oauth2.client-id={yourClientId}
okta.oauth2.client-secret={yourClientSecret}
Now run the application:
./gradlew bootRun.
Either log out of your Okta developer account or use an incognito window, and navigate to the application in a browser (it runs on port 8080).
You’ll be directed to log in using your Okta account.
Once you’ve logged in, you’ll be redirected back to the app. Yay - success!
Learn More About Netty, Spring Boot, and OAuth 2.0
In this tutorial, you created a basic “Hello world” application using Netty. You saw how Netty is a super-powerful framework for creating TCP and UDP network protocols. You saw how it supports non-blocking IO, and how Spring WebFlux builds on top of Netty to provide a reactive, non-blocking HTTP application framework. You then built a “Hello world” application in WebFlux, after which you used Okta as an OAuth 2.0 / OIDC provider to add OAuth 2.0 login to the application.
You can see the completed code for this tutorial on GitHub at oktadeveloper/okta-netty-webflux-example.
In addition to WebFlux, some powerful networking frameworks are built on top of Netty. Apple recently open-sourced ServiceTalk, a reactive microservices client/server library that supports HTTP, HTTP/2, and gRPC. There’s also Armeria, an open-source asynchronous HTTP/2 RPC/REST client/server library built on top of Java 8, Netty, Thrift, and gRPC. Its primary goal is to help engineers build high-performance asynchronous microservices.
If you’re interested in learning more about Spring Boot, Spring WebFlux, and OAuth 2.0, check out these useful tutorials:
- Build Reactive APIs with Spring WebFlux
- What the Heck is OAuth?
- Identity, Claims, & Tokens – An OpenID Connect Primer, Part 1 of 3
- Build a Secure API with Spring Boot and GraphQL
If you have any questions about this post, please add a comment below. For more awesome content, follow @oktadev on Twitter, or subscribe to our YouTube channel!
A Quick Guide to Java on Netty was originally published on the Okta Developer Blog on November 25, 2019.
Published at DZone with permission of Andrew Hughes , DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.
|
https://dzone.com/articles/build-a-simple-netty-application-with-and-without
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
Elements
Introduction
This section covers the element types for our halfedge mesh, as well as the traversal and utility functions that they offer.
#include "geometrycentral/surface/halfedge_mesh.h"
Note
In the most proper sense, these element types are really “handles” to the underlying element. They refer to a particular element, but the Vertex variable in memory is not really the mesh element itself, just a temporary reference to it.
For instance, it is possible (and common) to have multiple Vertex variables which actually refer to the same vertex, and allowing a Vertex variable to go out of scope certainly does not delete the vertex in the mesh.
However, the semantics are very natural, so for the sake of brevity we call the type simply Vertex, rather than VertexHandle (etc).
Additionally, see navigation for iterators to traverse adjacent elements, like for(Vertex v : face.adjacentVertices()).
Construction
Element types do not have constructors which should be called by the user. Instead, the element will always be created for you, via one of several methods, including:
- Iterating through the mesh: for(Vertex v : mesh.vertices())
- Traversing from a neighbor element: Face f = halfedge.face()
- Iterating around an element: for(Halfedge he : vertex.outgoingHalfedges())
Adding a new element to a mesh is covered in the mutation section.
Comparison & Hashing
All mesh elements support:
- equality checks (==, !=)
- comparisons (<, >, <=, >=, according to the iteration order of the elements)
- hashing (so they can be used in a std::unordered_map)
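For example, a minimal sketch of using vertices as keys in a std::unordered_map (assuming the mesh class is HalfedgeMesh, per the header above):
#include "geometrycentral/surface/halfedge_mesh.h"
#include <unordered_map>
using namespace geometrycentral::surface;
// Element handles behave like lightweight values: they can be compared,
// ordered, and hashed, so they work directly as map keys.
void tagBoundaryVertices(HalfedgeMesh& mesh) {
  std::unordered_map<Vertex, bool> onBoundary;
  for (Vertex v : mesh.vertices()) {
    onBoundary[v] = v.isBoundary();
  }
}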
Vertex
A vertex is a 0-dimensional point which serves as a node in the mesh.
Traversal:
Halfedge Vertex::halfedge()
Returns one of the halfedges whose tail is incident on this vertex.
If the vertex is a boundary vertex, then it is guaranteed that the returned halfedge will be the unique interior halfedge along the boundary. That is the unique halfedge such that vertex.halfedge().twin().isInterior() == false.
Corner Vertex::corner()
Returns one of the corners incident on the vertex.
Utility:
bool Vertex::isBoundary()
Returns true if the vertex is along the boundary of the mesh.
See boundaries for more information.
size_t Vertex::degree()
The degree of the vertex, i.e. the number of edges incident on the vertex.
size_t Vertex::faceDegree()
The face-degree of the vertex, i.e. the number of faces incident on the vertex. On the interior of a mesh, this will be equal to Vertex::degree(); at the boundary it will be smaller by one.
Halfedge
A halfedge is the basic building block of a halfedge mesh. As its name suggests, a halfedge is half of an edge, connecting two vertices and sitting on one side of an edge in some face. The halfedge is directed, from its tail to its tip. Our halfedges have a counter-clockwise orientation: the halfedges within a face will always point in the counter-clockwise direction, and a halfedge and its twin (the neighbor across an edge) will point in opposite directions.
Traversal:
Halfedge Halfedge::twin()
Returns the halfedge’s twin, its neighbor across an edge, which points in the opposite direction.
Calling twin twice will always return to the initial halfedge:
halfedge.twin().twin() == halfedge.
Halfedge Halfedge::next()
Returns the next halfedge in the same face as this halfedge, according to the counter-clockwise ordering.
Vertex Halfedge::vertex()
Returns the vertex at the tail of this halfedge.
Edge Halfedge::edge()
Returns the edge that this halfedge sits along.
Face Halfedge::face()
Returns the face that this halfedge sits inside.
Note that in the case of a mesh with boundary, if the halfedge is exterior the result of this function will really be a boundary loop. See boundaries for more information.
Corner Halfedge::corner()
Returns the corner at the tail of this halfedge.
Fancy Traversal:
Halfedge Halfedge::prevOrbitFace()
Returns the previous halfedge, that is the halfedge such that he.next() == *this. This result is found by orbiting around the shared face.
Because our halfedge mesh is singly-connected, this is not a simple O(1) lookup, but must be computed by orbiting around the face. Be careful: calling he.prevOrbitFace() on each exterior halfedge can easily lead to O(N^2) algorithm complexity, as each call walks all the way around a boundary loop.
Generally this operation can (and should) be avoided with proper code structure.
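For instance, a minimal sketch of the kind of restructuring meant here: if you need the previous halfedge of every halfedge in a face, orbit the face once with next() and carry the previous handle along, rather than calling prevOrbitFace() per halfedge (hypothetical helper; O(d) total instead of O(d^2)):
void visitWithPrev(Face f) {
  Halfedge first = f.halfedge();
  Halfedge prev = first;
  Halfedge curr = first.next();
  do {
    // at this point prev.next() == curr holds
    // ... use the (prev, curr) pair here ...
    prev = curr;
    curr = curr.next();
  } while (prev != first);
}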
Halfedge Halfedge::prevOrbitVertex()
Returns the previous halfedge, that is the halfedge such that he.next() == *this. This result is found by orbiting around the shared vertex.
Because our halfedge mesh is singly-connected, this is not a simple O(1) lookup, but must be computed by orbiting around the vertex. Be careful: calling he.prevOrbitVertex() in a loop around a very high-degree vertex can easily lead to O(N^2) algorithm complexity, as each call walks all the way around the vertex.
Generally this operation can (and should) be avoided with proper code structure.
Utility:
bool Halfedge::isInterior()
Returns true if the halfedge is interior, and false if it is exterior (i.e., incident on a boundary loop).
See boundaries for more information.
Edge
An edge is a 1-dimensional element that connects two vertices in the mesh.
Traversal:
Halfedge Edge::halfedge()
Returns one of the two halfedges incident on this edge. If the edge is a boundary edge, it is guaranteed that the returned halfedge will be the interior one.
Utility:
bool Edge::isBoundary()
Returns true if the edge is along the boundary of the mesh. Note that edges which merely touch the boundary at one endpoint are not considered to be boundary edges.
See boundaries for more information.
Face
A face is a 2-dimensional element formed by a loop of 3 or more edges. In general, our faces can be polygonal with d \ge 3 edges, though many of the routines in geometry central are only valid on triangular meshes.
Traversal:
Halfedge Face::halfedge()
Returns any one of the halfedges inside of this face.
BoundaryLoop Face::asBoundaryLoop()
Reinterprets this element as a boundary loop. Only valid if the face is, in fact, a boundary loop. See boundaries for more information.
Utility:
bool Face::isBoundaryLoop()
Returns true if the face is really a boundary loop. See boundaries for more information.
bool Face::isTriangle()
Returns true if the face has three sides.
size_t Face::degree()
Returns the number of sides in the face. Complexity O(d), where d is the resulting degree.
Boundary Loop
A boundary loop is a special face-like element used to represent holes in the mesh due to surface boundary. See boundaries for more information.
Traversal:
Halfedge BoundaryLoop::halfedge()
Returns any one of the halfedges inside of the boundary loop.
Utility:
size_t BoundaryLoop::degree()
Returns the number of sides in the boundary loop. Complexity O(d), where d is the resulting degree.
Corner
A corner is a convenience type referring to a corner inside of a face. Tracking corners as a separate type is useful, because one often logically represents data defined at corners.
Traversal:
Halfedge Corner::halfedge()
Returns the halfedge whose tail touches this corner. That is to say, corner.halfedge().vertex() == corner.vertex().
Vertex Corner::vertex()
Returns the vertex which this corner is incident on.
Face Corner::face()
Returns the face that this corner sits inside of.
|
https://geometry-central.net/surface/halfedge_mesh/elements/
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
The advent of Java took the programming world by storm, and a major reason for that is the number of features it brought along. In this article we will discuss Constructor Overloading in Java. The following pointers will be covered in this article,
So let us get started then,
A constructor is a block of code used to create an object of a class. Every class has a constructor, be it a normal class or an abstract class. A constructor is just like a method but without a return type. When no constructor is defined for a class, a default constructor is created by the compiler.
Example
public class Student{ // no constructor defined
    private String name;
    private int age;
    private String std;
    // getters and setters
    public void display(){
        System.out.println(this.getName() + " " + this.getAge() + " " + this.getStd());
    }
    public static void main(String args[]){
        // to use the display method of Student, create an object of Student;
        // as we have not defined any constructor, the compiler supplies a default one
        Student student = new Student();
        student.display();
    }
}
In the above program, the default constructor is created by the compiler so that the object can be created. It is a must to have a constructor.
This brings us to the next bit of this article on Constructor Overloading in Java.
In the above example, a Student object can be created with the default constructor only, and the other attributes of the student are not initialized. But there can be other constructors, which are used to initialize the state of an object. For example –
public class Student{
    // attributes
    String name;
    int age;
    String std;
    // Constructors
    public Student(String name){ // Constructor 1
        this.name = name;
    }
    public Student(String name, String std){ // Constructor 2
        this.name = name;
        this.std = std;
    }
    public Student(String name, String std, int age){ // Constructor 3
        this.name = name;
        this.std = std;
        this.age = age;
    }
    public void display(){
        // print the fields directly (no getters are defined in this class)
        System.out.println(this.name + " " + this.age + " " + this.std);
    }
    public static void main(String args[]){
        Student stu1 = new Student("ABC");
        stu1.display();
        Student stu2 = new Student("DEF", "5-C");
        stu2.display();
        Student stu3 = new Student("GHI", "6-C", 12);
        stu3.display();
    }
}
This brings us to the next bit of this article on Constructor Overloading in Java.
The this() call can be used inside a parameterized constructor to invoke the default (no-argument) constructor. Please note that this() must be the first statement inside a constructor.
Example
public Student(){} // Constructor 4
public Student(String name, String std, int age){ // Constructor 3
    this(); // calls the default constructor; if it is not the first statement of the constructor, a compile-time ERROR occurs
    this.name = name;
    this.std = std;
    this.age = age;
}
Note
Thus we have come to the end of this article on 'Constructor Overloading in Java'.
|
https://www.edureka.co/blog/constructor-overloading-in-java/
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
#include <PointModel.h>
to activate collision on both sides of the point model (when surface normals are defined on these points)
activate computation of normal vectors (required for some collision detection algorithms)
Link to be set to the topology container in the component graph.
Display the collision model points' free positions (in green).
|
https://www.sofa-framework.org/api/master/sofa/html/classsofa_1_1component_1_1collision_1_1_point_collision_model.html
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
On Thu, 11 Mar 2004 21:42:04 -0800, "H. J. Lu" <hjl@lucon.org> wrote:
>which means it will be discarded if the driver is builtin. If
>xxx_remove may be discarded, please make sure there is
>
>#ifdef MODULE
> remove: xxx_remove,
>#endif
>
>so that xxx_remove won't be included when the driver is builtin.
remove: __devexit_p(xxx_remove),
is the correct method. The pointer is required for CONFIG_MODULE _or_ CONFIG_HOTPLUG, otherwise it must be set to NULL. __devexit_p() does all that.
-
To unsubscribe from this list: send the line "unsubscribe linux-ia64" in the body of a message to majordomo@vger.kernel.org
More majordomo info at
Received on Fri Mar 12 01:05:48 2004
This archive was generated by hypermail 2.1.8 : 2005-08-02 09:20:24 EST
|
http://www.gelato.unsw.edu.au/archives/linux-ia64/0403/8788.html
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
Add Advanced Search to a Project
Adding Advanced Search to a Project
Set up a Bloomreach Experience Manager project.
Add the following dependencies to cms/cms-dependencies/pom.xml in your project:
Configure the Document Types Filter Dropdown
Configure which document types can be selected in the filter section of the search perspective:
/hippo:configuration/hippo:frontend/cms/cms-advanced-search/genericFilters:
  document.type.namespaces: [ 'myproject' ]
  document.type.excluded: [ 'myproject:basedocument' ]
The above example includes all document types in the myproject namespace, except myproject:basedocument.
Note that when no document types are selected in the filter, the search results will include all document types regardless of the above configuration.
Configure Inclusion of Subtypes
By default, search results do.
Configure Character Limit of Search Input
Since 13.2.0, the following property of type Long is available to limit the number of characters that is allowed in the main search input of the search perspective:
/hippo:configuration/hippo:frontend/cms/cms-static/advancedSearchPerspective:
  search.input.maxlength: 100
If the property is absent or does not have a positive value, no limit is set.
|
https://documentation.bloomreach.com/13/library/enterprise/enterprise-features/advanced-search/adding-advanced-search-to-a-project.html
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
Interview Questions on Java
What if the main method is declared as private?
The program compiles, but at runtime the JVM cannot find an accessible public main method and reports an error, so the program does not run.
Why is the main method declared as public?
main(..) is the first method called by the Java environment when a program is executed, so it has to be accessible from the Java environment. Hence the access specifier has to be public.
What is the difference between the equals() method and the == operator?
Or
what is difference between == and equals
Or
Difference between == and equals method
Or
What would you use to compare two String variables – the operator == or the method equals()?
The == operator compares object references (whether two variables point to the same object), while the equals() method compares the actual contents of the objects.
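A minimal sketch illustrating the difference (two string literals share one pooled object, while new String() creates a distinct object); it produces the output shown below:
public class EqualsTest {
    public static void main(String[] args) {
        String s1 = "abc";
        String s2 = "abc";                 // same interned literal as s1
        String s3 = new String("abc");     // distinct object with the same contents
        System.out.println("== comparison : " + (s1 == s2));          // same pooled object
        System.out.println("== comparison : " + (s1 == "abc"));       // literal also comes from the pool
        System.out.println("Using equals method : " + s1.equals(s2)); // same contents
        System.out.println(s1 == s3);                                  // different objects
        System.out.println("Using equals method : " + s1.equals(s3)); // same contents
    }
}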
Output
== comparison : true
== comparison : true
Using equals method : true
false
Using equals method : true
What if the static modifier is removed from the signature of the main method?
The code still compiles, but the JVM cannot invoke a non-static main method, so the program fails at runtime (see the last question below for details).
What is final, finalize() and finally?
Or
What is finalize() method?
Or
What is the difference between final, finally and finalize?
final is a keyword: a final variable cannot be reassigned, a final method cannot be overridden, and a final class cannot be extended. finally is a block used with try/catch that always executes, whether or not an exception is thrown, and is typically used for cleanup code. finalize() is a method that the garbage collector may call on an object before reclaiming its memory.
Why does Java not support global variables?
Global variables are globally accessible. Java does not support globally accessible variables due to following reasons:
- Global variables break referential transparency
- Global variables create collisions in the namespace
How to convert a String to a Number in a Java program?
The valueOf() method of the Integer class is used to convert a string to a number. Here is the code example:
String numString = "1000";
int number = Integer.valueOf(numString);
What is the difference between public, private, protected and default access specifiers?
private members are accessible only within the declaring class. Default (package-private) members are accessible from any class in the same package. protected members are accessible within the package and from subclasses. public members are accessible from everywhere.
What are class variables?
Or
What is static in java?
A static (class) variable belongs to the class rather than to any particular instance, so a single copy is shared by all objects of that class. Static members can be accessed without creating an instance of the class.
There are several classes like String which we use in program but we do not import it. Why?
String is a part of the "java.lang" package, and this package is loaded by default by the Java Virtual Machine, so we need not import it. If we import it explicitly, there is no harm in doing that.
What is the difference between JDK, JRE and JVM?
The JDK (Java Development Kit) contains the tools needed to develop and compile Java programs (such as javac) plus a JRE. The JRE (Java Runtime Environment) contains the libraries and the JVM needed to run Java programs. The JVM (Java Virtual Machine) is the component that actually executes the compiled bytecode.
Explain the flow of writing the java code to run?
We develop the Java program using the Java Development Kit (JDK) and then compile it using the "javac" command, which is also part of the JDK. The result of the javac command is a .class file, which contains bytecode and is platform independent: the same .class file can be used on any operating system. To run it, we use the java command, which is part of the JRE; it starts the JVM, which loads the bytecode and converts it to machine code.
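For example, assuming a class named HelloWorld defined in HelloWorld.java:
javac HelloWorld.java   # compiles the source into platform-independent bytecode: HelloWorld.class
java HelloWorld         # the JVM (part of the JRE) loads the bytecode and executes it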
What are the environment variables required to be set to run a java program?
- JAVA_HOME- set it to the root location of java
- PATH- set it to the bin directory of java location.
What are some of the advantages of Java?
- Java is an Object Oriented language.
- Java comes with built-in support of several features like garbage collection, multi threading, socket programming etc.
- Java is platform independent.
What are the key differences between C and Java?
There are several differences between java and C. Key differences are-
- Java does not support multiple inheritance of classes; instead it supports multilevel inheritance, which means a class cannot extend more than one class.
- There is no concept of pointers in Java.
- Java does not support unions, structures or destructors.
- Java includes built-in support for memory management via garbage collection, whereas in C the developer has to take care of it.
What is POJO and what are its advantages?
POJO stands for Plain Old Java Object. It promotes encapsulation and is always recommended. A POJO follows certain rules:
- All member variables should be declared as private so that they are not accessible to the outer world directly.
- Corresponding to each member variable, there will be a getter/setter public method which will be used to get or set the values.
- Naming convention of getter methods will be getXXX() where XXX is the name of variable with first character in upper case.
- Naming convention of setter methods will be setXXX() where XXX is the name of variable with first character in upper case.
The biggest advantage of POJO is encapsulation. Let’s take an example.
We have one class Account which has a member variable "balance". If we exposed this variable directly, other code could set it to a negative value, which is not correct. But we can handle it via setter methods and have a check in the setter that sets it to 0 if any code tries to assign a negative value.
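A minimal sketch of such a POJO (class and method names are assumptions chosen for illustration):

public class Account {
    private double balance; // hidden from the outside world

    public double getBalance() {
        return balance;
    }

    public void setBalance(double balance) {
        // guard against invalid values instead of exposing the field directly
        this.balance = balance < 0 ? 0 : balance;
    }
}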
Can I change the argument name of main method to something else?
Yes, we can change it. It is only the signature that we cannot change. As long as the argument type is an array of String, we can use any name. Below is an example of a valid main method.
public static void main(String[] xyz)
Can we have a main method in multiple classes?
Yes, we can have a main method in multiple classes. The class whose name we pass while running the application will be executed. For example, if we have two classes ClassA and ClassB and both have a main() method, then calling java ClassA executes the main method of ClassA, and calling java ClassB executes the main method of ClassB.
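A minimal sketch (class names are assumptions); each class carries its own main method, and the class named on the command line is the one that runs:

class ClassA {
    public static void main(String[] args) {
        System.out.println("main of ClassA"); // runs with: java ClassA
    }
}

class ClassB {
    public static void main(String[] args) {
        System.out.println("main of ClassB"); // runs with: java ClassB
    }
}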
If we do not provide any arguments to main method while running from command prompt, what will be the value of string array?
It will be an empty array and not null. To validate this, we can simply print the length of the argument using args.length and it will print 0. If it were null, we would get a NullPointerException.
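A quick way to verify this (a hedged sketch; the class name is an assumption):

public class ArgsLength {
    public static void main(String[] args) {
        // running "java ArgsLength" with no arguments prints 0, not a NullPointerException
        System.out.println(args.length);
    }
}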
What will happen if we import the same package multiple times?
Program will compile and run successfully. Internally JVM loads it only once.
Will using a * in import statement import all child packages?
No. * will import all the classes of the given package and will not import classes of child packages. For example, we have below 4 classes
- com.example.A
- com.example.B
- com.example.java.C
- com.example.java.D
using import com.example.* will import com.example.A and com.example.B, but not the classes of the java child package.
What will happen if we remove the “static” keyword from main method?
The code will compile fine, but on running we will get a "NoSuchMethodError" because the JVM looks for a static main method, since a static method can be invoked without any instance of the class.
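For illustration, a class like the following (the name is an assumption) compiles but fails at launch because the JVM cannot find a static main method; the exact error message depends on the JVM version:

public class NoStaticMain {
    // "static" is missing, so "java NoStaticMain" fails at startup
    public void main(String[] args) {
        System.out.println("never reached");
    }
}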
Name the class which is the parent of every class in Java?
java.lang.Object
What will happen if I write multiple public class in same java file?
The Java file will not compile, as we can have only one public class per Java file. All other classes have to be non-public. Also, the name of the Java file has to be the same as that of the public class.
Is Java pure Object Oriented programming language?
No, Java is not a pure object-oriented programming language because it supports 8 primitive data types (char, byte, int, long, double, float, short, boolean).
Explain different primitive data types of java?
There are 8 primitive data types supported by java.
- byte- it is 8 bit signed and its default value is 0 and minimum value is -128 (-2^7) and maximum value is 127 (2^7-1)
- short- it is 16 bit signed its default value is 0 and minimum value is -32768 (-2^15) and maximum value is 32767 (2^15-1)
- int- it is 32 bit signed with default value as 0. Minimum value is -2147483648 (-2^31) and maximum value is 2147483647 (2^31-1)
- long- it is 64 bit signed with default value as 0. Minimum value is -9223372036854775808 (-2^63) and maximum value is 9223372036854775807 (2^63-1)
- float- is 32-bit IEEE 754 floating point with default value as 0.0f
- double- is 64-bit IEEE 754 floating point with default value as 0.0d
- boolean- represents one bit and default value as false. Memory consumed by boolean is platform dependent.
- char- is 16 bit Unicode character with minimum value as 0 (\u0000) and maximum value as 65535 (\uffff)
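The ranges above can be verified with a small sketch using the wrapper-class constants:

public class PrimitiveRanges {
    public static void main(String[] args) {
        System.out.println(Byte.MIN_VALUE + " .. " + Byte.MAX_VALUE);        // -128 .. 127
        System.out.println(Short.MIN_VALUE + " .. " + Short.MAX_VALUE);      // -32768 .. 32767
        System.out.println(Integer.MIN_VALUE + " .. " + Integer.MAX_VALUE);  // -2147483648 .. 2147483647
        System.out.println(Long.MIN_VALUE + " .. " + Long.MAX_VALUE);
        System.out.println((int) Character.MIN_VALUE + " .. " + (int) Character.MAX_VALUE); // 0 .. 65535
    }
}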
|
http://www.wideskills.com/java-interview-questions/java-interview-questions
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
With the recommendation of pybind11, I am trying to call python libs in my C++ program these days(Ultimate goal: use pickle to read data in python, and pass them to C++). I used Visual Studio 2019 and python3.7(Anaconda Version) to do this job. Following the tutorial in pybind11 Doc, I practiced the simplest demo like that:
#pragma comment(lib, "python37.lib")
#pragma comment(lib, "python3.lib")
#include <iostream>
#include <pybind11/pybind11.h>
using namespace std;
namespace py = pybind11;

int main() {
    py::object Decimal = py::module::import("decimal").attr("Decimal");
}
But the VS2019 told me this:
Unhandled exception at 0x00007FFA9A5384E7 (python37.dll) in hello_Call_py_From_C.exe: 0xC0000005: Access violation reading location 0x0000000000000025.
The include and lib directories I used are :
Anaconda3\include, pybind11\include; Anaconda3\libs;
When I continued to debug, it told me "unicodeobject.c not found", like this:
To make it clear, I searched the directories on my PC, and every 'unicodeobject' file is a '.h' rather than a '.c'; they are mostly in the Anaconda folder rather than in pybind11:
I also checked whether decimal is a legal package in Python, and in fact it is:
So I am even more puzzled and don't know what is to blame. What's more, I would also appreciate it if you could guide me to another, wiser way to tackle this task. Thank you!
User contributions licensed under CC BY-SA 3.0
|
https://windows-hexerror.linestarve.com/q/so59269781-Fail-when-Extending-Python-with-C
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
CompTIA Training Classes in Potsdam, Germany
Learn CompTIA in Potsdam, Germany and surrounding areas via our hands-on, expert-led courses. All of our classes are offered either on an onsite, online or public instructor-led basis. Here is a list of our current CompTIA related training offerings in Potsdam, Germany:
- Introduction to Python 3.x
27 April, 2020 - 30 April, 2020
- CompTIA Security+ (Exam SY0-501)
20 April, 2020 - 24 April, 2020
- RED HAT SATELLITE V6 (FOREMAN/KATELLO) ADMINISTRATION
6 July, 2020 - 9 July, 2020
- ASP.NET Core MVC
27 July, 2020 - 28 July, 2020
In Python, the following list is considered False:
False, None, 0, 0.0, "",'',(),{},[]!
Invoking an external command in Python is a two step process:
from subprocess import call call(["ls","…
|
https://www.hartmannsoftware.com/Training/CompTIA/Potsdam-Germany
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
#include <knuminput.h>
Detailed Description
An input control for real numbers, consisting of a spinbox and a slider.
- Deprecated:
- since 5.0, use QSpinBox instead
KDoubleNumInput combines a QSpinBox and optionally a QSlider with a label to make an easy to use control for setting some float parameter. This is especially nice for configuration dialogs, which can have many such combined controls.
The slider is created only when the user specifies a range for the control using the setRange function with the slider parameter set to "true".
A special feature of KDoubleNumInput, designed specifically for the situation when there are several instances in a column, is that you can specify what portion of the control is taken by the QSpinBox (the remaining portion is used by the slider). This makes it very simple to have all the sliders in a column be the same size.
- See also
- KIntNumInput
Definition at line 455 of file knuminput.h.
Constructor & Destructor Documentation
Constructs an input control for double values with initial value 0.00.
Definition at line 701 of file knuminput.cpp.
Constructor.
- Parameters
-
Definition at line 709 of file knuminput.cpp.
destructor
Definition at line 728 of file knuminput.cpp.
Constructor.
the difference here below instead
Definition at line 718 of file knuminput.cpp.
Member Function Documentation
- Returns
- number of decimals.
- See also
- setDecimals()
You need to overwrite this method and implement your layout calculations there.
See KIntNumInput::doLayout and KDoubleNumInput::doLayout implementation for details.
Definition at line 882 of file knuminput.cpp.
- Returns
- the value of the exponent use to map the slider to the spin box.
- Returns
- the maximum value.
- Returns
- the minimum value.
- Returns
- the prefix.
- See also
- setPrefix()
- Returns
- the reference point for relativeValue calculation
- Returns
- the current value in units of referencePoint.
This is an overloaded member function, provided for convenience.
It essentially behaves like the above function.
Contains the value in units of referencePoint.
Specifies the number of digits to use.
Definition at line 1045 of file knuminput.cpp.
- Parameters
-
Definition at line 1081 of file knuminput.cpp.
Definition at line 1065 of file knuminput.cpp.
Sets the maximum value.
Definition at line 982 of file knuminput.cpp.
Sets the minimum value.
Definition at line 971 of file knuminput.cpp.
Sets the prefix to be displayed to
prefix.
Use QString() to disable this feature. Note that the prefix is attached to the value without any spacing.
- See also
- setPrefix()
Definition at line 1038 of file knuminput.cpp.
- Parameters
-
Definition at line 912 of file knuminput.cpp.
Sets the reference Point to
ref.
If ref == 0, emitting of relativeValueChanged() is blocked and relativeValue() just returns 0.
Definition at line 905 of file knuminput.cpp.
Sets the value in units of referencePoint.
Definition at line 895 of file knuminput.cpp.
- Returns
- the step of the spin box
Definition at line 998 of file knuminput.cpp.
- Parameters
-
Definition at line 934 of file knuminput.cpp.
Sets the special value text.
If set, the spin box will display this text instead of the numeric value whenever the current value is equal to minVal(). Typically this is used for indicating that the choice has a special (default) meaning.
Definition at line 1057 of file knuminput.cpp.
Sets the suffix to be displayed to
suffix.
Use QString() to disable this feature. Note that the suffix is attached to the value without any spacing. So if you prefer to display a space separator, set suffix to something like " cm".
- See also
- setSuffix()
Definition at line 1031 of file knuminput.cpp.
Sets the value of the control.
Definition at line 890 of file knuminput.cpp.
- Returns
- the step of the spin box
- Returns
- the string displayed for a special value.
- See also
- setSpecialValueText()
- Returns
- the suffix.
- See also
- setSuffix()
- Returns
- the current value.
Emitted every time the value changes (by calling setValue() or by user interaction).
|
https://api.kde.org/frameworks/kdelibs4support/html/classKDoubleNumInput.html
|
CC-MAIN-2019-26
|
en
|
refinedweb
|
lp:~bigdata-dev/charms/xenial/rsyslog-forwarder-ha/trunk
- Get this branch:
- bzr branch lp:~bigdata-dev/charms/xenial/rsyslog-forwarder-ha/trunk
Branch merges
Related bugs
Related blueprints
Branch information
- Owner:
- Juju Big Data Development
- Status:
- Development
Recent revisions
- 25. By Kevin W Monroe on 2016-10-27
remove sitepackages=True (it is not needed, and can cause conflicts if user has py2 flake8 installed)
- 24. By Kevin W Monroe on 2016-10-26
use explicit charm name in amulet test (bundletester confused rsyslog with rsyslog-fwrd without this)
- 23. By Kevin W Monroe on 2016-10-26
use our xenial rsyslog in the test; better metadata tags
- 22. By Kevin W Monroe on 2016-10-26
rework unit test for py3 and check that the rsyslogd process is actually running
- 21. By Kevin W Monroe on 2016-10-26
adjust test targets and setup for py3
- 20. By Kevin W Monroe on 2016-10-26
sync latest charmhelpers
- 19. By Kevin W Monroe on 2016-09-29
return (dont die) if a syslog aggregator relation already exists. this can happen if multiple principal charms are colocated on the same machine. in this case, rsyslogd will already be configured on the machine, so just log the event and move on.
- 18. By Kevin W Monroe on 2016-09-26
simplify deployment test and move to xenial
-
Branch metadata
- Branch format:
- Branch format 7
- Repository format:
- Bazaar repository format 2a (needs bzr 1.16 or later)
- Stacked on:
- lp:charms/rsyslog-forwarder-ha
|
https://code.launchpad.net/~bigdata-dev/charms/xenial/rsyslog-forwarder-ha/trunk
|
CC-MAIN-2019-26
|
en
|
refinedweb
|
Agenda
See also: IRC log
<wseltzer> [adjourned]
<trackbot> Date: 28 October 2014
<bhill2> Meeting: WebAppSec WG TPAC 2014 F2F Day 2
<bhill2> zakim aabb is [SalonB]
<rigo> scribenick:rigo
BH: scheduled until 16:15
Introduction:
-> see minutes from 27 Oct
BH: Presentation of the Agenda
<scribe> Agenda:
<bhill2> regrets, Mike West
survey results for rechartering
<bhill2>
Credential management API, almost consensus, two people volunteered to work on it
BH: give more structure so that browsers can make more out of password fields
<bhill2> we are back...
<dveditz> we're back?
we're back
<bhill2> general agreement
<bhill2> JeffH has concerns about scope, potential impact of other efforts, e.g. FIDO, or related work
<bhill2> ... coming from WebCrypto workshop to replace passwords
<bhill2> he will respond on list
dveditz: should work on credentials to help with passwords
BH: client side is not compromising really data, but protecting server side data
tanvi: write only passwords
(Lupin) iframe all of login forms and scrape them to see if
secure
... http-page, write only type passwords
dveditz: argued against it for 10 years
BH: reasonable agreement to work on credentials
<dveditz> I argued against password auto-fill without any user interaction
BH: any objections
=> silence
BH: write only form elements will not be done
<dveditz> (because then an attacker can mass-harvest passwords potentially)
<tanvi> Automated Password Extraction Attack on Modern Password Managers,
BH: no-sniff, header X-content:
no-sniff. Content sniffing rules that A.Barth worked on in IETF
habe been picked up by WhatWG
... defined in mimesniff WhatWG, seems like already done in WhatWG
... no reason to duplicate
dveditz: everyone is using that, so why bother
<deian> tanvi: a more recent paper on password manaagers:
<tanvi> deian: thanks!
dveditz: mainly documenting what IE (invented) and chrome is doing already
BH: agree
... unanimous agreement to work on CSP next generation
... dom API, service workers, explicit about fetch etc.
... no objections in the room
... suborigins (joe will talk about it later)
... sandboxed cross origin workers, two votes against
... concerned about breaking google analytics and FB like button. But also vector for large scale attack. manipulating jquery e.g.
... we are addressing with subresource integrity, but also dynamic scripts
... difficult to apply subresource integrity to that. Interested in ways to secure here
... post message channel, well known interface. Much smaller attack surface. Web components is not taking that approach
... iframes are heavyweight. If include 30 of those scripts, it becomes to heavy
... perhaps create a worker, static interface that is identified with subresrouce integ. and some dynamic part
RW: perhaps looking into provenance
dveditz: interested in this, but
not if it is duplicated elsewhere
... the thing I read tried to get around the iframe weight
... ECMA 6 or 7 will come up with containers
deian: we are doing something
similar, container, context X, then extension components within
this worker
... little bit more powerful than just ecmascript
BH: put in charter, script sandboxing
RW: STREWS is planning to work on sandboxing, so would be good to have it in the charter
BH: no objection, seems to be reasonable to work on
bhill2: WebCrypto Workshop,
Virginie will summarize on Thursday. Some of it could end up
here
... concerns about bringing it into WebAppSec because of the more controversial IPR around hardware
... don't want to derail CSP
... can follow on public-web-security for chartering
RW: good idea to have only a strong dependency
bhill2: yes, whether in WEb
Crypto or new group, will insist on dependency
... summarizing the points for the charter... (too fast for scribe)
slides will be sent out later
Current status of COWL: portin to latest FF & Chromium
deian: how to do enforcement? CSP allows to control where context can disseminate data
ISSUE-69?
<trackbot> ISSUE-69 -- Consider directives to manage postMessage and external navigation of iframes -- raised
<trackbot>
deian: COWL has intersection with
issue 69
... creates a kind of reverse sandbox
...
<bhill2> question: why not add this to the javascript engine for label propagation
ckerschb: label propagation, how to prevent an attacker to inject their own labels
bhill2: what about cover channels, side channels?
deian: once you enable mode, you
get post message
... it is an opt-in system, doesn't break existing web
<bhill2> deian: difficult to use by programmers, don't need to label everything, get unlimited labeled blobs as needed (in response to last question)
bhill2: what is it look like for sites distributing widgets, third party to include your resource, make it as easy as possible
deian: self protection?
bhill2: yes, like button ... the ability to copy-paste a snip of javascript into pages
deian: this should be the case,
the Lworkers does that. fundamental principle is data source is
trusted.
... widget can always send to parent
bhill2: ship same app ot browser 8 (not support) and 9 (that supports)?
deian: you give up security as you can not know whether it works
bhill2: so have to understand the analytical message and react upon it
<tanvi> server knows if browser supports cowl
deian: yes, prot should tell whether supports COWL
MS: dozens of checks?
deian: dozens, being able to
??
... implemented a few larger scales
<bhill2> hi freddy, we are just doing some questions for COWL and will start shortly
bhill2: is this interesting stuff
to consider?
... some sense of interest?
<bhill2> rigo: have you considered as a use case offline workers?
<bhill2> ... you don't want e.g. firefox os phone to leak your address book only because it can
<bhill2> ... in this case the labeling has the function of protecting user-centric info
<bhill2> ... but how to your provide an interface for the user to manage privileges
<bhill2> ... e.g. to prevent an app from phoning home
<bhill2> deian: if you as a user are using e.g. facebook, you're specifying a policy about who can view your information
deian: thought about phones, but were mainly concentrating on server side
<bhill2> ... we can label that information through app UI interactions
in context of offline applications, lables are easier
<bhill2> rigo: if I am an attacker, I would say if I temporarily disallow phoning home, can I hide the information I want to leak and wait until the restriction is over
<bhill2> deian: if you store things, the mandatory label persists, so it is re-applied when it is read again
<bhill2> rigo: so there are additional requirements on, e.g. local storage
<bhill2> rigo: have you thought about abusing restrictions? block everything X?
<bhill2> deian: whoever you send data to, they have to decide to unlabel it themselves.
<bhill2> ... labels can only be raised by the context, not by another context
<bhill2> rigo: can I create e.g. DRM to e.g. prevent copy/paste?
bhill2: very interesting
discussion, first time we have seen it, have to bring it to the
list
... any strong objection to include it into the charter discussion
Dan: we had already lightweight isolation etc..
bhill2: should we consider this under "lightweight sandboxing" have some more on the table, COWL would be added to the list
Melinda: yes, just add
JeffH: yes, but not DOM manipulation
bhill2: some agenda discussion; Suborigin proposal
<inserted> scribenick: bhill2
jww: we have the origin security
primitive in the browser
... sometimes you can't create a new origin just to make something untrusted
<rigo> JW: often different things into the same origin, but need different levels of security
jww: would be great to be able to
split things into smaller origin pieces
... e.g. google.com is search, google.com/maps is a very different application
... that is on the same origin for historical and other reasons
... but has different security properties, etc.
... would be great to be able to treat them as different principals
... proposal allows creation of arbitrary namespaces for applications
freddy - better?
<freddyb> bhill2: much better. thanks!
<freddyb> also thanks to jww for repeating!
devd: dropbox would be very excited to use this
jww: current proposal is a csp
header with a tag for what suborigin you want to enter
... gets represented as an addendum to the scheme
... e.g. plugins have own way to deal with origins, would be nice to not have to rewrite that by putting into current origin token architecture
... this is basically named sandboxes
<freddyb> I wonder if there's overlap with scopes (existing in serviceworkers, manifests) and cookie path limitations ;)
<wseltzer> [start again in 10 min]
<inserted> scribenick: deian
<wseltzer> [resuming]
<bhill2>
bhill2: want to check in SRI, since there are news. what's going on? and let's look at
devd chrome has an implementation, we'll take a look. most people seem happy with the spec. same goes with the community
devd if anything we should be doing more
joel big question: https and http. should it apply to http?
two half: should SRI run on non-https pages. can you even make an integrity attribute? can you have integrity attribute to http content?
dveditz: not sure why you wouldn't want to do http. we don't want to relax MIX; as integrity guarantee why not allow it?
joel: integrity should not be allowed to run on insecure origins; i think we should ignore the integrity attribute since it's meaningless and attackers can modify
tanvi: not serverside attacks, only MITM
<freddyb> it's only meaningless in the face of _active_ attackers, not passive attackers (i.e. reading)
<freddyb> I agree with what I think I heard dev saying (that it still protects the compromised CDN case on plain http)
dveditz: i know we're trying to encourage tls, but it seems mean to not include http
joel from our perspective it is a little questionable on how you present this to developers. dveditz it's same as http content, so most of the time you get what you expect.
joel SRI gives you integrity, but it's not clearly useful since attacker can modify it.
dveditz: if things keeps getting breaking because of no https, then devs will lean to use tls
joel: if you have a MITM attack then you can just serve content with proper integrity attribute
should webcrypto occur on http?
can't do it in any meaningful way for http
bhill2: load resouce but not check integrity?
dveditz: two choices: not load resources OR spit out console message and say wheneter or not you checked the integrity
freddyb: not sure i got everything, but as far as i understand: heavily agree on MIX: should not relax this
on the other hand, afaiu not sure if we'll reach consensus on http vs https, maybe take offline
original question: i think it's a good idea to go forward with scripts only for now. seems like the most meaningful way to go forwar
joel: i would suggest style as well, but script is definitely the priority
<wseltzer>
wseltzer: dave ragget suggested (on truste & permission) and I'm looking to collect from dif places to address similar problems in similar ways. so if you're seeing this in other places, let me know. and if others (chris palmer started a thread) are itnerested please bring them in the discussion
bhill2: we have a subset that is not controverisal that people want to use now. there is the other subset that is controvertial. should we prune out the latter?
ship the first (level 1) and work on the second at a different pace
joel: seems like a good idea, given that even at google we can't reach consensus on the controvertial
bhill2: no relaxing of MIX warning. script & maybe style
what about object source?
devd not sure what the use for object src is?
bhill2: same as script
devd: concern for compatability. too many things that happen with object to deal with it now
<tanvi> downloads?
dveditz: would love to do it, but in practice it's hard. devd: sites can use CSP to restrict objects for now
bhill2: what about downloads?
tor exist nodes backdooring exe files. lots of download sites. unfortunately this brings back the http vs. https issue
would be nice to apply integrity to it
not the same threat model since it's not part of the DOM, but the threat model is worse
devd: what's the mixed content story for downloads?
tanvi: we block http downloads
<tanvi> well, not exactly
tanvi: depends, e.g., if it's
iframe we block, but if you navigate it's okay
... ^ does that capture ti?
<tanvi> yes :)
devd seems like a great case for using SRI for http. tanvi: download whole file and check integrity?
devd: browsers download then raname, we would check integrity before rename
joel: there were a bunch of
issues that were raised wrt to this
... controvertial in chromium team, but encouraging http downloads since we have integrity may lead to less https
mnot: what's the proposed experience when integrity fails? devd: same you get today: this may be malware
<freddyb> but integrity is not authenticity. I'd hope we could argue that "good" websites would really want to strive towards both (and the latter is only viable through HTTPS?)
<freddyb> ..thinking that integrity wouldnt really discourage https
because this imposes on author to keep hashes up to date, this may result in many false positives. may lead to people switching browsers. bhill2 what about user copying link and pasting it into addr bar
joel: UAs can make it harder to copy URLs that have integrity attributes
already do this for javascript:// urls
bhill2: doesn't have to have the malware warning
joel: we sould adjust according to what we see in the wild [that is the false positives]
bhill2: how do we reduce the click throughs? can keep iterating on this
mnot: ssl configuration is one problem. this is an authoring problem and I wonder how this would work out
bhill2: even if page is maliciously modified, some users will get around it
tanvi: seems like MITM is a
bigger threat.
... in 1st version, what if we keep it ambigous about http? joel: certainly okay
... for downlaods, i see a reason to keep it to https only, but for js not so much since there are so many examples of people pulling content from http places
bhill2: the attacker cost to compromise jquery cdn is smaller than everybody on https. mnot: devs have to be careful when they use SRI, since e.g., flicker may change source e.g., due to bugs
tanvi: how do we handle versioning?
how should sites deal with changes?
joel: is SRI useful? devd: we can give multiple hash values. if file names remain constant you may supply current, old version of hashes
if filename has part of hash, this is a solvable problem
bigger question is to ask jquery about this kind of versioning problems
we can run experiments for a few months and see how things go. we don't rely on latest version of jquery, we rely on specific version, so hashes just gives you more
joel: try to load latest from cdn, but if it's not what you expect, but you can load from your site
deian: what about signatures? bhill2: shakes/has seisure
<freddyb> are we going to replace an integrity problem with a key-sharing problem?
<freddyb> ;)
<tanvi> ;)
problem with signatures is that it doesn't really change the threat model and adds lots of complexities
<wseltzer> there is interest, now with webcrypto...
JeffH: +key management issues
dveditz: there is an xml signature proposal
mnot: we're interest in solving this and more general problem. though we talk about it at a transport layer where you have another identity. use case: im a bank, but don't trust cdn with everything (only with certain things)
<freddyb> (with SRI relying on CORS-enabled resources, one could do XMLHttpRequest to fetch, webcrypto to verify and blob URIs (=same origin) to load)
dveditz: lots of places where signatures are needed, but webappset is not the right place for this
jeffh: I'm imaging this useful/deployable with server side support. I have this web app, on the server side I check all the deps and compute the hashes. On the server side I can make sure that whatever I spit out to the client-side is correct
<wseltzer> [another use case is the user has out-of-band assurance at one point in time, and wants to pin that]
<tanvi> (i'd like to talk about caching when we have a chance.)
mnot: link headers/manifest seems like an alternative approach ...; bhill2: we're probably going to split spec into 2 specs: content addressable vs. the less controversial; devd: we have objections to objects, joe wants styles (maybe not so bad), but anchors and downloads are uncontrovesrial
billh2: it's worth trying (that's what we mean by uncontroversial), and see if it works/if people can put it to use
mnot: talk about or implement?
bhill2_: implement it enough and
actually learn from how people use
... we can speculate on how people use this, but i don't think we know until we put it out there and learn from data
JeffH: when you say object you mean <object>? dveditz and applet and embed tags; bhill2_ stuff covered by object-src
in csp
mnt: is it worth to talk about this in ters of fetch?
dveditz: spec is written in terms of fetch
bhill2_: mostly specified as
monkey patch to fetch
... lots of good reasons to specify in separate document. may have mutual dependencies. certainly we're going to need to figure out hot to normally reference fetch
... download is still controversial in terms of how it should work and how we should do it? dveditz controvertial in how you present it to people. devd browser vendors should decide this
joel: it is controversial in chrome to do http downloads. dveditz because of the http part from https? should we do downloads? joe: yes
devd its far more like that you have a download from http url on an https page
mnot: problem is that people publishing link and actual download are different people
what should we warn on? how will people react?; tanvi: if we put leave it ambigous then that would solve the problem for chrome too
jeol: we would be happier if everybody did secure downloads, but sure
devd: if chrome and ff differ on this part it's okay according to spec
bhill2_: how much will the cut simplify the spec
joel: it cuts out all the cache stuff, that's huge
<freddyb> we did not talk about reporting!
<freddyb> happy to leave it out for now, just remembered.
<dveditz> is there reporting in the spec?
<dveditz> I missed that
<tanvi> reporting is important to alert websites about third party libraries that have changed withotu their knowledge
<freddyb> dveditz: there is.
<freddyb> tanvi: in theory, you could get it from logs through the fallback mechanism
<dveditz> freddyb: what section?
<freddyb> dveditz: 4th paragraph this 3.5.4: "MUST repor a violation"
<freddyb> report*
devd: how painful the non-cononical src would be to implement?
<bhill2_> 3.5.2 in the editor's draft
<bhill2_> noncanonical-src attribute
dveditz: don't really understand why we would use the noncononical-src
if the source would change the you would fallback on your slow secure server
said tanvi
devd: simple enough attribute that we should do it
dveditz: I will ask; joe: I think it's complicated, but if we see enough value we should do it
<freddyb> +q
<freddyb> i.e., if (typeof jQuery === "undefined") { // create script node + same-origin jquery URL and add to DOM }
freddby: was wondering if you really want an authenticate same-origin version of the reource. what about: if jquery is not defined, the use shim to load from secure source? not saying we shouldn't do it in the spec; joe: interesting point: should we be defining what it means to fallback or is that application specific. tanvi: that makes it harder to implement apps
if it's cross-origin content: I can't take hash
tanvi: if we take noncon.. and ask devs to write own code to handle fallback, the adoption would be much harder
devd: i think devs already do this since they don't want to rely on jquery cdn to stay up
joe: if we don't offer it then we'll never know. devd: should this be in level 1 spec or level 2 spec. joe: I don't think it's controv, but it may be hard to implement
<freddyb> SRI requires cors-enabled though, which means you _can_ compute a hash manually.
dveditz: if you load nonc.. do
you still check hash? bhill2_: no, it's a fallback &
trusted
... I don't know it sites will use this; joel: proposal: rename src and backup-src
... src has to be what normally gets loaded
<freddyb> I too dislike "noncanonical-src" as an attribute name. "src" & "fallback" seems more intuitive.
joe: will update spec to use better names; tanvi: does fallback perform integrity check? devd: no, since the fallback should be for something that definitely works
<wseltzer> [have we ever done a privacy review of the various reporting options?]
dveditz: from the text it seems like report doesn't block so we should rename to report-only.
devd: spec needs editing
tanvi: how should reporting work? dveditz: piggy back on csp. devd: could trigger JS error
bhill2_: to wseltzer's question: we had external people look at it and it doesn't leak any addition info. dveditz there was an issue with reporting and redirects, but we fixed the spec; joe: it's been ad-hoc
wseltzer: bookmarklets could hook on to reporting to drop reports
dveditz: need to make sure that sites don't learn about what add-on's you have instaleld
tanvi: what about caching? if same resource is fetched from two sites. joes &devd: we're going to punt to level 2
wseltzer: thanks :)
bhill2_: to summarize 2 sides: this is not making anything worse, it's strictly making things better. dveditz if we load things by hash then we could have cache poisoning attacks and attribute attack to wrong person
devd this is a big can of works, let's just focus on level 1 and deal with rest later
bhill2: if you identify exact content. where it comes from doesn't matter. joe: there were lots of questions about caching headers, which I think is more controversial.
consnus on the pruning and working on the less controversial first
bhill2: there is an proposal to expose dns info to the browser. might be useful to have this group of people to look at this and evaluate what could possibly go wrong & tell them not to do certain things even if we don't consider it at webappsec; wseltzer this is from versign who want so to be involved. we should work with them and see what can be improved
<freddyb> thanks for taking notes, deian
<wseltzer> [lunch break]
<freddyb> it helped a great deal in syncing the audible bits into something meaningful \o/
freddyb: glad it helped, sorry for missing things :)
<freddyb> this raised hands here \o/ are unrelated to your lunch/afternoon plans
<freddyb> enjoy lunch, people!
<wseltzer> [now really adjourned]
|
http://www.w3.org/2011/webappsec/minutes/2014-10-28-webappsec-minutes.html
|
CC-MAIN-2019-26
|
en
|
refinedweb
|
Hello...
I recently installed Unity on my PC (Windows 10). Something very odd happens: from one moment to the next, when I open MonoDevelop it marks an error on a simple statement such as
public GameObject referencesGO;
It's as if it cannot find the GameObject class. It also cannot find Camera.main, Vector2 and other classes, and I do not understand why. It marks errors on almost everything.
Thank you very much, I hope someone has the solution to this problem because it does not allow me to program anything.
you'll need to post more of your script to enable anyone to help you. a random line provides no context...
Answer by Bunny83
·
Sep 20, 2016 at 01:51 PM
There are two possibilities:
You removed the line using UnityEngine; at the top of your script
using UnityEngine;
You opened the script file alone inside MonoDevelop. When you open a script from inside Unity, Unity will open the solution / project instead of just a single file.
|
https://answers.unity.com/questions/1244551/error-script-not-found-class-gameobject.html?sort=oldest
|
CC-MAIN-2019-26
|
en
|
refinedweb
|
To implement a color picker, I want to draw a rectangle with a gradient of colors inside. I tried to use a container with a DecoratedBox.
It sounds like you already know how to draw a gradient and your question is more about how to make a
DecoratedBox as big as possible.
If your
DecoratedBox appears in a
Column or
Row, consider wrapping it in an
Expanded and setting the
crossAxisAlignment to
CrossAxisAlignment.stretch.
If your
DecoratedBox is a child of a widget that doesn't provide a size to its child (e.g.
Center), try wrapping it in a
ConstrainedBox with a
constraints of
new BoxConstraints.expand(). Here's an example:
import 'package:flutter/material.dart';

void main() {
  runApp(new MyApp());
}

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return new MaterialApp(
      title: 'Gradient Example',
      home: new MyHomePage(),
    );
  }
}

class MyHomePage extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return new Scaffold(
      appBar: new AppBar(
        title: new Text('Gradient Example'),
      ),
      body: new Center(
        child: new ConstrainedBox(
          constraints: new BoxConstraints.expand(),
          child: new DecoratedBox(
            decoration: new BoxDecoration(
              gradient: new LinearGradient(
                colors: <Color>[Colors.red, Colors.blue]
              ),
            ),
          ),
        ),
      ),
    );
  }
}
|
https://codedump.io/share/qpicWkjr91JA/1/paint-an-gradient-on-screen
|
CC-MAIN-2019-26
|
en
|
refinedweb
|
On Mon, 2007-10-08 at 10:41 -0600, Orion Poplawski wrote:
> seth vidal wrote:
> >.
>
> Didn't help. In the anaconda case:
>
> RPM build errors:
> File not found by glob:
> /var/tmp/anaconda-11.3.0.36-1.cora.1-root-mockbuild/usr/lib/python?.?/site-packages/pyisomd5sum.so
> Installed (but unpackaged) file(s) found:
> /usr/lib64/python2.5/site-packages/pyisomd5sum.so
>
> Installed with:
>
> install -m 755 pyisomd5sum.so
> $(DESTDIR)/usr/$(LIBDIR)/$(PYTHON)/site-packages
>
> LIBDIR is set with:
>
> FULLARCH := $(shell uname -m)
>
> ifneq (,$(filter ppc64 x86_64 s390x,$(FULLARCH)))
> LIBDIR = lib64
> else
> LIBDIR = lib
> endif
>
> $ mock --arch=i386 -r fedora-devel-i386 shell
> init
> mock-chroot> uname -m
> x86_64
>
> Looks like I need setarch?
>
> $ setarch i386 mock --arch=i386 -r fedora-devel-i386 shell
> init
> mock-chroot> uname -m
> i686
>
> Seems like --arch=i386 should take care of this.
>
|
http://www.redhat.com/archives/fedora-devel-list/2007-October/msg00442.html
|
CC-MAIN-2014-42
|
en
|
refinedweb
|
This is patch 3 for 1.06d and 1.06e. You may be curious about it because this patch is for both versions. The reason is that it works with both versions; the only difference is that the new scenarios adding the Star Wars Galaxy run only on 1.06e, as that is a feature of 1.06e.
Another point: probably the new scenarios run with game versions before 1.06d, but never with 1.06d.
To help you download the mod, I have marked as Old-Obsolete the versions you do not need if your game is updated to the latest 1.06e.
The latest versions are marked as Active.
The DLC Lumens is only necessary if you want to play as the First Order or against this faction.
These are the additions in patch 3:
-Cells improvements.
-Ships improvements.
-Descriptions improved.
-First Order can build all the ground units.
-New Galaxies scenarios added.
-Small fixes and improvements in other files.
-Previous patches additions.
Now, you will need these things if you want to play the mod.
Moddb.com
+
Moddb.com
This is the basic installation.
-Uncompress the mod with winrar in the main game folder.
-Find the file REMEMBER.CFG inside the new folder named Polaris_Sector_Alliance and cut it, then paste the file into the folder ..\Documents\My Games\Polaris Sector\.., overwriting the file already there.
-Launch the game. Remember, this version of the mod is only translated to English. You must select English in the game settings.
-Mod uninstall. Open the file REMEMBER.CFG into the folder ..\Documents\My Games\Polaris Sector\.. and replace the line CurrentModPath "Polaris_Sector_Alliance/" by //CurrentModPath "Polaris_Sector_Alliance/"
The mod adds a complete list of credits if you want to know about them.
Polaris Sector Alliance 1.0b converts the new 4x game from Slitherine at a Star Wars 4x game very similar at concept to the old SW Rebellion game but...
Polaris Sector is a new 4x game created by Softwarware and published by Slitherine. This game has a lot of potential and two very good features,.
Polaris Sector 1.06e - Alliance mod patch 5 for the Polaris Sector Alliance 1.06d&1.06e by Nomada_Firefox This is a small patch for people which they...
Polaris Sector 1.06e - Alliance mod patch 5 for the Polaris Sector Alliance 1.06d&1.06e by Nomada_Firefox This is a small patch for the mod main file...
Polaris Sector 1.06e - Alliance mod patch 4 for the Polaris Sector Alliance 1.06d&1.06e by Nomada_Firefox This is a small patch for the mod main file...
Polaris Sector 1.06e - Alliance mod patch 3 for the Polaris Sector Alliance 1.06d&1.06e by Nomada_Firefox This is a small patch for the mod main file...
This file adds new scenario Galaxies for the mod, just uncompress it in the folder where you installed the Polaris Sector Alliance overwritting all files...
A small patch for improve the First Order ships in battle and some other small things. Just overwrite the mod content with the files inside this why does the Star wars galaxy only have two races to select how can i change this thanks
Edit: I just downloaded the SWGalaxies maps and the Star wars galaxy ER says 100 stars 8 races but no races appear
Hi, I have the latest version installed and found a small bug.
When playing as the Imperials I did unlock Fighters LvL2 and got the TIE Interceptor and TIE AdvancedMK2/Avenger unlocked. Only problem is, one of the weapon pylon slots on the TIE Avenger seems to be 1 pixel too short. Its the right lower pylon (on the second deck so to say)... so I just can build in 5 of possible 6 small pylons.
If its not a big hassle, can you provide a hotfix for it? Or at least tell me how I can fix it myself? ^^ One of those would be great... I will pause my game till I can use those Avengers in their full glory. Thanks for this awesome mod :-)
Any troubleshotting question. Go to my site Firefoxccmods.com
Blocked at my site? send me a private message with your ip.
This comment is currently awaiting admin approval, join now to view.
Well, if you do not check it. A new game update was launched, the 1.06e. Probably the last version from the mod will work with it. However I will make a update sooner or later, more probably later because I want add somethings as a new customized Star Wars Galaxy. With few lucky, I can make it.
Wow, man! Thanks for the hard work. I am downloading right now and this mod looks awesome!
def is alot of fun !
I really enjoy playing your mod thank you for all your hard work I been addicted to the game since I downloaded it lol. Is this the final or are you adding more stuff in the future?
Final would be more correct.
Nomada_Firefox, where can I find the patch? I have not bought the game.
Could you share patch 1.04 for the mod?
Please buy the game. I do not give support to anyone who uses a pirated copy.
|
https://www.moddb.com/mods/star-wars-polaris-sector-alliance
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
What would be the best place to gather basic metrics?
We have multiple implementations spanning many namespaces and edges. I would like to see if I could identify a single place, perhaps on HSREGISTRY or HSBUS, that I could capture certain events like searches (from all customers) and record transfers (with requester and provider).
The goal is to have a dashboard that would show simple stats such as searches by participant, records shared by participant and records consumed by participant. These are the 3 most important.
I appreciated the feedback on the other question of "how" but now I'm hoping to find the "Where".
|
https://community.intersystems.com/post/what-would-be-best-place-gather-basic-metrics
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
#include "ltwrappr.h"
L_UINT LAutomation::GetUndoLevel(void);
Gets the current automation undo level.
The current automation undo level.
The undo level determines the number of automation operations that can be done within an automation container. If the undo level is set to the default value of DEF_AUTOMATION_UNDO_LEVEL [16], then each container associated with the automation handle has an undo level of 16.
To change the undo level call LAutomation::SetUndoLevel.
To undo an automation operation, call LAutomation::Undo.
To determine whether an automation operation can be undone, call LAutomation::CanUndo.
For information about grouping multiple operations into a single undo process, refer to LAutomation::AddUndoNode.
Required DLLs and Libraries
For an example, refer to LAutomation::SetUndoLevel.
Direct Show .NET | C API | Filters
Media Foundation .NET | C API | Transforms
Media Streaming .NET | C API
|
https://www.leadtools.com/help/sdk/v22/automation/clib/lautomation-getundolevel.html
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
Technical Articles
Fiori Elements – How to develop a List Report – Basic Approach!
*****
Ok so the suspense is over. Having talked about what Fiori Elements are, how to design a list report, and having confirmed that a List Report is what you want, we move on to actually creating our first List Report… which will look just like the one above.
Tip: You can find this and previous blogs on Fiori Elements on the Fiori Elements wiki.
There are a few different ways of creating a List Report. We are going to start with the simplest approach to show how simple it really can be. By looking at the simplest approach we’ll:
- Gain a general understanding of how to work with Fiori Elements
- Understand the pre-requisites
- Delve underneath to see how Fiori Elements impacts the underlying SAPUI5 architecture
The use case we’ll focus on in this blog is:
- We have identified we want a new list report
- We have time to plan the list report
- We have decided to create a new data extract (as a ABAP CDS view) for the list report
- We will expose this ABAP CDS view as an OData Service
- Since the ABAP CDS view will be created specifically for this list report, we have decided to put all the annotations that will drive the Fiori Elements into the ABAP CDS view itself
What we won’t do in this blog is look in detail at the underlying annotations (remember these are like formal comments) in detail. We’ll do that in the next blog of this series as we look at an alternative way of working with Fiori Elements using something called a local annotations file.
Tip: You’ll need to know the alternate approach particularly if your OData service comes from an ABAP system lower than NetWeaver 7.4 SP05 or from a non-SAP system.
Of course like all Fiori app frameworks there are extension options we can use to take our list report that little bit further…but that’s getting ahead of ourselves, so let’s come back to basics.
The 3 essential steps to creating a Fiori Element List Report are:
- Prepare the OData service
- Prepare the UI Annotations
- Create a Fiori App using the Fiori Elements List Report template
Note that the easiest way to create the Fiori App is by using the SAP HCP Web IDE (or Personal Web IDE) and the List Report (formerly Smart Template) Application wizard. However it is possible to create your UI application manually in any text editor once you know what’s required.
We’ll look at each of these steps in turn but let’s start by confirming the pre-requisites for Fiori Elements generally.
Pre-requisites for Fiori Elements
The pre-requisites are simple.
- A web browser that can run a Fiori app, e.g. Microsoft Internet Explorer 11, Google Chrome, Firefox, etc.
- A frontend server that provides a SAPUI5 version capable of supporting the Fiori Elements (formerly known as Smart Templates) we want to use.
- If you are running an On-Premise Gateway system that means you need NetWeaver 7.50 SP1 or above. Naturally the higher the release, the more Fiori Elements and Fiori Elements features available.
- An OData Service to provide the data for the Fiori Element, e.g. to provide the data for the List Report.
Note also that currently Fiori Elements support OData version 2.0 using vocabulary-based annotations. Annotations are a standard but optional part of the OData paradigm.
One thing to notice right away is that the backend system or database which holds our data is not a limitation. We don’t need our backend system to be on a specific ABAP release or even S/4HANA, and we don’t need our data to be on a HANA database.
That said, if we are using a backend ABAP system of NetWeaver 7.4 SP05 or above or a S/4HANA system, the backend system provides some features that make it easier to create both the annotations and the OData Service. That makes building our List Report easier, which is why our first use case uses a S/4HANA system for the example.
Similarly a HANA database is not necessary, but if we have a HANA database underlying our backend system the performance is improved, especially when we want to include analytics or do keyword text searches.
Prepare the OData Service
First and foremost we prepare the OData service that will extract data from our backend system to be displayed in our List Report app.
When developing a List Report it is important our OData service supports the following OData features:
- $count – This will be used to show a count of items in the list
- $filter – This will be used to filter our list
- And of course as it’s a list, the usual paging features such as $top, $skip, etc.
Tip: When designing the OData service it’s worth considering what features we want to support in our List Report. Is it just a read-only list or should we include some CRUD (Create, Read, Update, Delete) features? Do we want to support Draft document handling? If so the OData service needs to support these.
Of course there are several ways to create an OData service. As this blog is focussed on a simple example, we use the quickest way to create an OData service that’s available in the latest ABAP and S/4HANA systems, which is:
- Define a ABAP CDS View to extract the data for the List Report
- Tip: ABAP Development Tools for Eclipse are needed to do this.
- The latest ADT can be found here
- Note that if you are on NetWeaver 7.51, the ADT has been recently updated to ADT Version 2.68
- Test the CDS View using the Data Preview Tool in the ADT to check the data is returned correctly
- Expose the ABAP CDS View as an OData service using the annotation @OData.publish: true
- Activate the OData service in the SAP Gateway so that it can be consumed by an OData client, such as our Fiori app
- Test the OData service in an OData Client (a web browser will do at a pinch) to check it returns the expected data
If you have never used CDS views before, then now’s the time to learn it! CDS views underly all the latest Fiori frameworks.
Creating CDS views has already been covered in the NetWeaver help for About ABAP Programming Model for SAP Fiori in section Define a Data Model Based on CDS Views
Similarly exposing a CDS view as an OData Service has been covered in the NetWeaver help section for About ABAP Programming Model for SAP Fiori in Expose CDS View as an OData Service
You can also find more on this topic in the SAPUI5 SDK > Developing apps with Fiori Elements (Smart Templates) > Preparing OData Services.
The example we are using was prepared for the Teched 2016 workshop DEV268 Building an End-to-End SAP Fiori App Based on SAP S/4HANA and ABAP. I was privileged to assist some of the team with the workshop – Jens Weiler, Ingo Braeuninger, and Chris Swanepoel – at the Las Vegas session.
Prepare the UI Annotations
Given we have created a CDS View to specifically support our List Report, we can add the annotations directly to the CDS View, that will look something like this… I’ve highlighted where the annotations are applied.
Whether placing the annotations directly into the CDS view definition is a good idea from an architecting standpoint as you scale UX is debatable. Mixing the annotations directly in with the CDS view raises some Separation of Concerns issues, and it can be rather annoying if we later want to reuse the same CDS View for a different Fiori Elements app.
Fortunately as of NetWeaver 7.51 that’s no longer a problem as explained in these blogs:
ABAP News Release 7.51 Meta Data Extensions ABAP CDS
Modularizing CDS Annotations
We’ll talk more about annotations in the next blog.
Create a Fiori App using the List Report template
So finally we are ready to create our app based on Fiori Elements. The simplest way to do this is to generate the app using the List Report Application wizard in the SAP HCP Web IDE.
As usual, we use the menu option File > New > Project from Template to access the app generation wizards.
On the Template Selection tab, we select the List Report Application (previously Smart Template Application) wizard.
On the Basic Information tab, we enter the Basic Information of our app – Project Name and Title. We can add a namespace as well if you wish.
On the Data Connection tab, we select our OData service as usual.
On the Annotation Selection tab, any annotation files sourced from the OData service are listed. Typically if we are using a SAP Gateway hosted OData service this includes:
- A service metadata xml generated by SAP Gateway
- The annotations assigned to the OData Service itself
On the Template Customization tab, we select the OData Collection (i.e. entity set) on which we want the list report to be based. If there are associated navigations then these can optionally be selected also.
Tip: These associated navigations are useful for displaying additional information when navigating from the List Report to the related Object Page and to subsequent Object Pages. We’ll get to the Object Page in a future blog.
Press Finish and our app is generated.
Provided our OData Service contains all the annotations needed to create a basic List Report … we can run our fully functioning app straight away!
Filters, grouping, sorting, multi-item selection, button and link navigation, and even the totals bar are all working immediately. We can even go further than that and add rating stars, progress bars, and charts… and we'll get to those in a future blog.
Taking a Quick Look Under the Covers
So how does a Fiori Element app work? We can see some clues by taking a look at the structure of our generated app.
If you’ve created some custom apps before or implemented some SAP delivered Fiori apps, you’ll notice the app is structured a little differently from the SAPUI5 apps you are used to. There are no view, controller, or model folders and files. That’s because these will all be handled by the Fiori Element templates themselves.
Instead of view, control, and model folders and files, we see an annotations folder and annotation.xml file. This annotation.xml file is the local annotations file which we will look at more closely in the next blog. There are also several i18n property files.
Taking a close look also at the manifest.json file…. We see the annotations files have been added as “data sources”. As usual the i18n properties files are listed as “models”.
The real magic of a Fiori Element app is in the “sap.ui.generic.app” section (which you will find after the “sap.ui5” section).
This is where the List Report and Object Page dynamic templates are applied to our app. At runtime, the app applies these dynamic templates to the annotations in our project (including the ones we inherited from our OData Service) to generate a working, high quality, production-ready, SAPUI5 app.
What if…
So by now a few questions may be coming to mind:
- What happens if the OData Service doesn’t have annotations? (Remember annotations are an optional part of the OData paradigm)
- Where do we find a list of all the annotations?
- How does the List Report user interface map to the annotations?
- What features does the List Report provide and how can they be controlled by annotations?
And that’s what we’ll start to look at in the next blog as we delve into the local annotations file.
Screenshots shown on:
- ABAP Development Tools on Eclipse Mars
- Web IDE Version: 161103
- SAPUI5 version 1.40.12
Really nice blog, have done the same in hands-on sessions at TechEd Barcelona last week. As a follow-up question regarding system landscape: could I actually do this on the NetWeaver AS ABAP Developer Edition offered at, connected to my HCP trial account via local Cloud Connector?
Hi Alexander,
yes this is possible - feedback how it worked for you is also highly appreciated.
Cheers
Jens
Hi Jocelyn,
If we need to provide a range of values in the filter, what should we do? For example, we might need multiple company codes in the filter criteria, or a date range to be entered as input. Please advise.
Regards
Ik
Hi Ik
The list report uses a fully functioning Smart Filter bar.
Ranges work in the filter criteria as do date ranges for input in date fields.... with the usual provisos about checking for SAP Notes first as there were some early issues with dates in the filter bar.
So once you click into the filter field you can create all sorts of conditions.
There's an example of the Smart Filter Bar in the SAPUI5 Explored library.
The Fiscal Year filter shows you the default behaviour of filter fields.
Rgds
Jocelyn
Hi Jocelyn,
Do you have the full source code for the CDS shown with annotations in this blog.
Thanks,
Subba
Thanks for the clarification Jocelyn, but for the normal case the range input parameter is still not supported in a CDS view (with the parameters option); please correct me if I am wrong.
Perhaps we are talking about different things IK?
You can certainly enter a range in a filter parameter in the Smart Filter Bar of a Fiori Element List Report e.g. Company code between 1000 and 1100.
However your response sounds like you may be talking about parameters directly in the CDS definition itself. Please note that the CDS definition itself is not the focus of this blog. If you need confirmation of CDS functionality that would be best handled as an ABAP question perhaps.
Hello Jocelyn,
Very nice blog.
With more Fiori elements on their way (i.e. no more SAPUI5 development for most of the use cases), what role will UX developers play?
Hi Vinod,
Don't worry! There's no intention to replace UX developers by covering all use cases with Fiori Elements. We just want to cover the most common repeatable pattern - the stuff that should be pretty boring to a developer.
For a good UX developer there's always a use case - especially for tasks tailored to the mental model of a specific user group or business role.
Rgds,
Jocelyn
Hi Jocelyn! I followed the link you posted to the wiki, very useful page, thanks! I have used another blog post in the past to setup an Elements app. I tried to edit the wiki page, but either I don't have access, or possibly I just can't find the edit action. Can you please add this to the list of how to guides:
How to use Smart Templates with SAP Web IDE
Also, I am looking forward to more blog posts and information on using Fiori Elements. There does not seem to be much documentation on the subject as of yet. We started to build an app using Elements, then ran into some roadblocks and restarted from scratch. Namely, we could not figure out how to navigate to other custom pages that weren't using the builtin ObjectPage and ListReport templates. As an example, we wanted to link to a page which we could control the view to switch between an add/edit function. Either we did not set something up properly, or there just isn't enough documentation or examples on how to do this. Because there is no routing and targets in the Elements app, we were not able to wrap our heads around how to get that part working. Time also became an issue, as it ended up being quicker for us to just create an app from scratch then spend a bunch more time digging through documentation, blog posts and Q/A to figure out what we wanted to do.
Hopefully there is more detailed documentation released on how to customize the Fiori Elements apps. There is also this link:
Sample Applications
Is there any way to download these sample apps so we can see how all of this is done? These sample apps look very well put together, and would be useful to have as an example when creating a new Elements app.
Thanks for your contributions!
Cheers,
Tim
Hi Tim Molloy ,
Thank you everyone for all the information on the blog. In response to your response, did you find a way to download it from sample applications?
Regards,
Özlem
HI Jocelyn,
Really like it, it helps a lot to clarify what the list report really is as the concept is actually not clearly described in the "Official Online Help".
Looking forward to your further blogs on this series.
Regards,
Marvin
Thanks Marvin. Still planning to get back into this series soon ...
Hi Jocelyn,
Really nice blog. You have written it in a way that made it really easy to understand for me.
I tried it out myself and it works well.
I have one question though:
I have followed your blog and created a simple sales order CDS model. I have created three CDS views.
I have navigation defined to and fro between SO <-> BP and SO <-> SI.
I used the List report template as illustrated by you. It works fine.
But I created a second project using List report template and this time in the 'Template Customizing' screen, instead of choosing 'Sales Order' entity set I choose 'Business Partner' entity set. I wanted to start the app with BP List and then navigate to the associated Sales order from there. This time the application ended up in error when I hit the search button on the List view initial screen.
I get the following error in my browser debugger console.
"Draft 2.0 object CDS_C_ZKU_SALESORDERTP_SADL_XA_X~C_Zku_So_BPTP requires selection condition on IsActiveEntity","propertyref":"","severity":"error","target":""
Question: Does this mean List Report template only work when data binding is done with the OData Entity Set corresponding to the CDS Root View(SO in my example)?
Thank you!
Best regards,
Krishnan
Hi Krishnan,
Ok based on your error message that isn't the problem you are experiencing. I take it you applied Draft Handling as per the ABAP Programming Model for Fiori - well done!
Once you apply Draft Handling the IsActiveEntity parameter is a mandatory parameter of the OData Service - so you have to provide it always.
As far as using a non root entity goes - you need to be able to access the entityset directly from the OData Service. So test your OData Service first to see if you have written your CDS View in a way that lets you access the BusinessPartner collection without first accessing the Sales Order collection
Hope that helps
Jocelyn
Hi Jocelyn,
I am using same example for CDS-BOPF, but our UI/UX is not UI5/FIORI but its different, so I thought instead of using annotation Odata:publish: true I opt for data reference as cds.
So metadata is coming fine for me, but while retrieving Salesorder or Item I am getting same error as Krishnan was getting--
Draft 2.0 object Z_BOPF~ZC_SALESORDERSHEAD requires selection condition on IsActiveEntity
I am not sure where I have to pass IsActiveEntity. Do I need to change the MPC class or alter the URL?
I am using the URL like this
/sap/opu/odata/SAP/Z_BOPF_SRV/ZC_SalesOrdersHead
Please guide on on this issue.
Regards,
Abhijeet kankani
I got the solution for this.
As the application uses drafts, the entity keys become composite: they include one more key called “IsActiveEntity”. So while passing key fields, one has to pass the additional key IsActiveEntity=true in the URL, and then it starts working.
Regards,
Abhijeet Kankani
Hi Kankani
I faced the same issue of “IsActiveEntity”.
But I changed the URL to /sap/opu/odata/sap/ZSD_SOHEADER_C_CDS/ZSD_SOHEADER_C?IsActiveEntity=true
and the issue persists.
Could you give me some advice of this issue.
Thanks.
It is OData syntax:
?$filter=IsActiveEntity%20eq%20true
Hi Jocelyn,
Really nice blog, it helped me a lot .
Is there also documentation on how to publish these apps to the Fiori GUI on an ABAP backend server, or as a standalone app?
Because we don’t have a Hana Cloud Fiori subscription and don't want it just to run in the WebIDE
Regards,
Thomas
Hi Thomas
You deploy your app to your runtime environment - whether that's an on-premise ABAP Gateway server or a Cloud Portal or Fiori Cloud edition or some other server - just like any other custom built Fiori app. That's a standard feature of the Web IDE - the help is here.
Rgds
Jocelyn
Hi Jocelyn,
Could you please help me understand if the future model for fiori apps in s/4 is going to be based on SADL with ABAP CDS annotations, BOPF and OData ?
So is it going to be completely template driven? Also, the YouTube videos on the ABAP programming model for S/4 aren't helpful to me. Can we have short tutorials and, most importantly, good, easy-to-understand documentation on CDS annotations?
Regards,
Prasenjit
Hi Prasenjit
The ABAP Programming Model for Fiori is the recommended approach when developing Fiori apps on NetWeaver ABAP servers.
However there is some flexibility and alternatives within it. For instance, we don't ever expect that it will be *completely* template driven - more that we try to cover as many common use cases as we can with templates.
This minimizes when you need to do full freestyle development - although of course you can develop the underlying OData Service in the same way even for freestyle apps.
Thanks for your feedback re the videos - I'm guessing you were looking at the 1-2 hour Teched replays. We are getting more information out there.. at SAP Inside Tracks and CodeJams. We also have more information explaining the approach being released as part of the Beacon project and keep checking SAP HANA Academy as well for tutorials and other material
Jocelyn
Hi Jocelyn,
Really nice blog, it's useful for me.
But I have a doubt: do I need to write OPA integration tests and QUnit tests?
If yes, how should I write this test code?
Regards,
Anjan.
Hi Anjan, You'll notice the test folder is still there so yes you should still do OPA and Qunit tests just the same as any other app
Rgds
Jocelyn
Hello, Jocelyn.
Very nice blog, but i have a question.
I've created a List Report using CDS + annotations + SEGW. Now I need to add my own selection field to the List Report without a reference to a CDS field. Can I do this with a local annotation or with ABAP code in the *_MPC_EXT class?
Two things...
One.
How to populate PARAMETERS on a CDS view? I have a TABLE FUNCTION which requires parameters, and I can't figure out how to make that work. Code commented that did not work, so I hardcoded for Question 2.
TWO
How are you populating _item. The code above with hardcoded parameters to Table Function works, but the navigation to _item brings up an empty table.
FYI... the parameters, and navigation work fine in Eclipse. Argh.
Thanks,
Tim
Full View definition above
Thanks, Timothy.
I can't figure out how to make a CDS table function work in a List Report either.
Maybe Jocelyn can help us ?
Hi folks, CDS Views with Parameters are not currently supported by Fiori elements. CDS Views with parameters are mainly used with analytical apps such as Smart Business tiles.
With a Fiori element we want the filters exposed as selection options for the user to determine. If you are wanting to default values for them then consider using User Defaults if this is a S/4HANA environment
Otherwise please look at the options for passing parameters to the app via external inbound navigation
If you have further question please make sure you raise them as question on answers.sap.com. Blog comments are not the proper place to resolve specific issues.
Jocelyn
Thanks, Jocelyn.
Hi Jocelyn - from which SAPUI5 version onward are CDS views with parameters supported in Fiori Elements? It seems to work on version:
SAPUI5 Distribution 1.78.2 (built at 2020-06-03T15:27)
But at my client, we have:
SAPUI5-on-ABAP 1.52.41
And it does not work there.
Not sure how to proceed. I can have the client upgrade to 1.78.2 or try a work around but I am not sure.
Thanks,
Jay
When I run a List Report app it works in SAP Web IDE, but when I run it in SAP Web IDE Full-Stack the app is blank and nothing is shown.
Are there some settings that have to be done when using SAP Web IDE Full-Stack?
Hi Rodrigo, No there isn't anything extra - they should work the same.
Just make sure you are starting your test by running it from the Component.js file.
Personally I have been finding a few problems with the Full Stack Web IDE recently so if that doesn't work it may be a bug. If you think it is that please raise an incident and perhaps send a tweet to @SAPCP
Rgds
Jocelyn
Yeah, was a general problem that occurred with all testing in SAP Web IDE Full Stack Trials accounts.
Thanks for the summary Jocelyn. Very helpful. Been watching this space for a while but we're just late on getting serious with the 'Fiori' journey with the team. I'm looking forward to reading all your 'back-blogs'. I'm finding that the configuration / Fiori elements approach eases some of the transition issues from older UI approaches. Some of the basic concepts easily map/translate from ALV or FPM/UIBB. 🙂
Hi Wilbert, If you are attending or have a colleague attending Teched 2018 it's worth going to/getting the presentation from: sessions CNA215 and CNA216 - which put all of this into perspective for now vs. future. And for hands on workshops get to CNA379 or CNA381. 🙂
And CNA205 What's new in Fiori elements... a WYSYWIG editor coming at last !!!
Hi Jocelyn,
I'm using DataFieldWithUrl in a list report, but it doesn't render as a URL. Please help.
Hi Jocelyn
A little list report related question:
I added some Actions (Function Imports) to the list report. No matter what I do, error messages from the backend are not displayed in the list report. I see the correct messages from BOPF action in the response header however.
Any hints on this?
@ Jocelyn Dart ,
I have a CDS view which has a date type field (ABAP.DATS). When I run it in RSRT (since it's an analytical view) or any multidimensional report the date comes out fine, but when I view the same field in a List Report it comes in a different format.
For Example - 10 Jan 2020 comes as -
Posting Date - 01/10/2020 (Jan 10 2020).
I have tried changing the metadata.xml file .. sap:display-format="Date" but it didn't work.
Can you please give some pointers ?
Best Regards,
Mayank Jaiswal
|
https://blogs.sap.com/2016/11/16/fiori-elements-how-to-develop-a-list-report-basic-approach/
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
Introduction
When a return statement is called in a function, the execution of this function is stopped. If specified, a given value is returned to the function caller. If the expression is omitted, undefined is returned instead.
return expression;
Functions can return:
- Primitive values (string, number, boolean, etc.)
- Object types (arrays, objects, functions, etc.)
Never return something on a new line without using parentheses. This is a JavaScript quirk and the result will be undefined. Try to always use parentheses when returning something on multiple lines.
function foo() {
    return
        1;
}

function boo() {
    return (
        1
    );
}

foo(); // --> undefined
boo(); // --> 1
Examples
The following function returns the square of its argument, x, where x is a number.
function square(x) { return x * x; }
The following function returns the product of its arguments, arg1 and arg2.
function myfunction(arg1, arg2){ var r; r = arg1 * arg2; return(r); }
When a function returns a value, the value can be assigned to a variable using the assignment operator (=). In the example below, the function returns the square of the argument. When the function resolves or ends, its value is the returned value. The value is then assigned to the variable squared2.
function square(x) { return x * x; } let squared2 = square(2); // 4
If there is no explicit return statement, meaning the function is missing the return keyword, the function automatically returns undefined. In the following example, the square function is missing the return keyword. When the result of calling the function is assigned to a variable, the variable has a value of undefined.
function square(x) { let y = x * x; } let squared2 = square(2); // undefined
|
https://www.freecodecamp.org/news/javascript-return-statements/
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
Color plays a more important role than any other aspect of a visualization. When used effectively, color adds more value to the plot. A palette is the flat surface on which a painter arranges and mixes paints.
Seaborn provides a function called color_palette(), which can be used to give colors to plots and adding more aesthetic value to it.
seaborn.color_palette(palette = None, n_colors = None, desat = None)
The following table lists down the parameters for building color palette −
The return value is a list of RGB tuples. The following palettes are readily available in Seaborn − Deep, Muted, Bright, Pastel, Dark and Colorblind.
Besides these, one can also generate new palettes.
It is hard to decide which palette should be used for a given data set without knowing the characteristics of the data. With that in mind, we will classify the different ways of using color_palette() types −
We have another function, seaborn.palplot(), which deals with color palettes. This function plots a color palette as a horizontal array. We will see more of seaborn.palplot() in the coming examples.
Qualitative or categorical palettes are best suitable to plot the categorical data.
from matplotlib import pyplot as plt
import seaborn as sb

current_palette = sb.color_palette()
sb.palplot(current_palette)
plt.show()
We haven’t passed any parameters in color_palette(); by default, we are seeing 6 colors. You can see the desired number of colors by passing a value to the n_colors parameter. Here, the palplot() is used to plot the array of colors horizontally.
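For example, a small illustrative sketch (not from the original tutorial) of asking for more colors from the default palette:

from matplotlib import pyplot as plt
import seaborn as sb

# request 10 colors instead of the default 6
sb.palplot(sb.color_palette(n_colors=10))
plt.show()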
Sequential plots are suitable to express the distribution of data ranging from relative lower values to higher values within a range.
Appending an additional character ‘s’ to the color passed to the color parameter will plot the Sequential plot.
from matplotlib import pyplot as plt import seaborn as sb current_palette = sb.color_palette() sb.palplot(sb.color_palette("Greens")) plt.show()
Note −We need to append ‘s’ to the parameter like ‘Greens’ in the above example.
Diverging palettes use two different colors. Each color represents variation in the value ranging from a common point in either direction.
Assume plotting the data ranging from -1 to 1. The values from -1 to 0 takes one color and 0 to +1 takes another color.
By default, the values are centered from zero. You can control it with parameter center by passing a value.
from matplotlib import pyplot as plt import seaborn as sb current_palette = sb.color_palette() sb.palplot(sb.color_palette("BrBG", 7)) plt.show()
The function color_palette() has a companion called set_palette(). The relationship between them is similar to the pairs covered in the aesthetics chapter. The arguments are the same for both set_palette() and color_palette(), but with set_palette() the default Matplotlib parameters are changed so that the palette is used for all plots.
import numpy as np
from matplotlib import pyplot as plt

def sinplot(flip = 1):
    x = np.linspace(0, 14, 100)
    for i in range(1, 5):
        plt.plot(x, np.sin(x + i * .5) * (7 - i) * flip)

import seaborn as sb
sb.set_style("white")
sb.set_palette("husl")
sinplot()
plt.show()
Distribution of data is the foremost thing that we need to understand while analysing the data. Here, we will see how seaborn helps us in understanding the univariate distribution of the data.
Function distplot() provides the most convenient way to take a quick look at univariate distribution. This function will plot a histogram that fits the kernel density estimation of the data.
seaborn.distplot()
The following table lists down the parameters and their description −
These are basic and important parameters to look into.
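A minimal sketch of distplot() in action, using synthetic data rather than a real data set:

import numpy as np
from matplotlib import pyplot as plt
import seaborn as sb

data = np.random.normal(loc=0, scale=1, size=200)  # synthetic sample data
sb.distplot(data)   # histogram plus a fitted kernel density estimate
plt.show()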
|
https://www.tutorialspoint.com/seaborn/seaborn_color_palette.htm
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
ProxySettings class
Platform: Appium. Language: Java SDK.
An object used to set and retrieve the details of the proxy server used to interact with the Eyes server.
Import statement
import com.applitools.eyes.ProxySettings;
Constructor
- AbstractProxySettings()
- This is the constructor for the ProxySettings class.
Methods
- getPassword()
- The value returned by this method is the proxy password set when the object was created.
- getUsername()
- The value returned by this method is the proxy username set when the object was created.
|
https://applitools.com/docs/api/eyes-sdk/index-gen/class-proxysettings-appium-java.html
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
/*
* kernel/workqueue_internal.h
*
* Workqueue internal header file. Only to be included by workqueue and
* core kernel subsystems.
*/
#ifndef _KERNEL_WORKQUEUE_INTERNAL_H
#define _KERNEL_WORKQUEUE_INTERNAL_H
#include <linux/workqueue.h>
#include <linux/kthread.h>
struct worker_pool;
/*
* The poor guys doing the actual heavy lifting. All on-duty workers are
* either serving the manager role, on idle list or on busy hash. For
* details on the locking annotation (L, I, X...), refer to workqueue.c.
*
* Only to be used in workqueue and async.
*/
struct worker {
/* on idle list while idle, on busy hash table while busy */
union {
struct list_head entry; /* L: while idle */
struct hlist_node hentry; /* L: while busy */
};
struct work_struct *current_work; /* L: work being processed */
work_func_t current_func; /* L: current_work's fn */
struct pool_workqueue *current_pwq; /* L: current_work's pwq */
struct list_head scheduled; /* L: scheduled works */
struct task_struct *task; /* I: worker task */
struct worker_pool *pool; /* I: the associated pool */
/* L: for rescuers */
/* 64 bytes boundary on 64bit, 32 on 32bit */
unsigned long last_active; /* L: last active timestamp */
unsigned int flags; /* X: flags */
int id; /* I: worker id */
/* for rebinding worker to CPU */
struct work_struct rebind_work; /* L: for busy worker */
/* used only by rescuers to point to the target workqueue */
struct workqueue_struct *rescue_wq; /* I: the workqueue to rescue */
};
/**
* current_wq_worker - return struct worker if %current is a workqueue worker
*/
static inline struct worker *current_wq_worker(void)
{
if (current->flags & PF_WQ_WORKER)
return kthread_data(current);
return NULL;
}
/*
* Scheduler hooks for concurrency managed workqueue. Only to be used from
* sched.c and workqueue.c.
*/
void wq_worker_waking_up(struct task_struct *task, unsigned int cpu);
struct task_struct *wq_worker_sleeping(struct task_struct *task,
unsigned int cpu);
#endif /* _KERNEL_WORKQUEUE_INTERNAL_H */
|
https://source.denx.de/Xenomai/ipipe/-/blame/b31041042a8cdece67f925e4bae55b5f5fd754ca/kernel/workqueue_internal.h
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
Script that can’t be stopped
Hey so I got pythonista recently and so far I am LOVING it very much! However when working with try, except I found something odd. If you have a loop that runs forever in a try block and try to end the program, before the thread ends it runs the except block. Using this oversight you can make a script that will never end! (or until you reopen pythonista)
def dont_stop_me_now():
    try:
        while True:
            print('oof')
    except:
        dont_stop_me_now()

dont_stop_me_now()
I found this quite entertaining but thought it is more of a bug than a feature so here it is lol
This is a "feature" in Python and is indeed the expected behavior (though it's not exactly helpful). Stopping a script in Pythonista does the same thing as pressing Ctrl+C when you run Python in a terminal: it raises a
KeyboardInterruptexception. Normally that causes the script to cleanly stop, while still running all exception handlers. Of course that doesn't help if you catch the
KeyboardInterruptexception, which you are doing indirectly here with the unrestricted
exceptblock. As you found out, you can still stop your script by terminating the app though.
If you have a real script where you want to catch exceptions but not prevent stopping the script, you should add the exception type you want to catch to the except block, for example except ValueError. If you want to catch all "normal" exceptions, use except Exception rather than except - the latter catches literally everything, whereas the former lets some "special" exceptions through (like KeyboardInterrupt).
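For example, a minimal sketch of that safer pattern (do_one_step is a stand-in for whatever work your loop does):

def keep_working():
    while True:
        try:
            do_one_step()           # hypothetical work function
        except Exception as e:      # catches ordinary errors only
            print('step failed:', e)
        # KeyboardInterrupt is not an Exception subclass,
        # so pressing the stop button still ends the loop

keep_working()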
By the way, the code that you posted can actually be stopped, you just need to press the stop button roughly 1000 times. At some point you'll hit Python's default recursion limit, because every time you catch the exception, you call dont_stop_me_now recursively.
Each time the "dont_stop_me_now" function is called, the return address is saved on the call stack; so that if the function ever did return, it would return to the caller. So, if you were to try to stop the program enough times, eventually it would run out of stack memory. This would take a very, very, long time though.
I'm not sure what Pythonista will do if it runs out of internal stack memory. I expect it would abort the program, but that's just a guess. Actually, all of that is a guess - but Python has to have a call stack, or it couldn't support recursion.
@technoway Python actually has a built-in limit on how deep the Python call stack can go. You can get the current limit with sys.getrecursionlimit() and change it with sys.setrecursionlimit(limit). The default limit is 1000, which is normally enough that you don't overflow the native stack even if you hit the Python recursion limit. In Pythonista that seems to be too much though, if I run an infinitely recursing function with the default limit, Pythonista crashes. With a lower limit (such as 500) I get a Python RecursionError as expected. I have no idea what the size of the native stack is on iOS, but it's probably lower than on Mac.
|
https://forum.omz-software.com/topic/4481/script-that-can-t-be-stopped
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
Installing Objective-C Compiler
Setup the Compiler
Objective-C is a strict superset of C; the additional functionality is obtained by linking to the Objective-C library when building using the standard GNU GCC Compiler. This makes setting up a new compiler very simple, as we can make a copy of the standard compiler and change the linker settings.
Caution: Make sure your GNU GCC Compiler is properly setup before attempting to setup for Objective-C
1) Go to Settings->Compiler and debugger...
2) Select GNU GCC Compiler and make a copy of it; name it whatever you like, but "GNU GCC Obj-C Compiler" would be the most descriptive.
3) Under Linker Settings, add -lobjc to Other linker options; you don't need to explicitly add the libobjc.a library, as the flag tells gcc to include it for us.
Adding Filetype Support
1) Go to Settings->Environment...
2) Select Files extension handling and add *.m
3) Go to Project->Project tree->Edit file types & categories...
4) Under Sources, add *.m to the list of filetypes.
Proper Syntax Highlighting
1) Go to Settings->Editor...
2) Select Syntax highlighting and go to Filemasks.... Add *.m to the list of filetypes.
3) Go to Keywords... (next to Filemasks...) and create a new set (up arrow). Add this to the Keywords: box:
Keywords
@interface @implementation @end @class @selector @protocol @public @protected @private id BOOL YES NO SEL nil NULL self
Note: if you feel so inclined, you could create a new custom lexer.
Optional Changes
1) Go to Settings->Compiler and debugger...
2) Under Other Settings, change Compiler logging to Full command line. If ObjC still refuses to build properly for you, you can use this to compare the command line arguments C::B uses against the commands you would use if you were building the program manually on the command line.
3) Under Other Settings, go to Advanced Options. For Link object files to executable and Link object files to console executable, move -o $exe_output to the end of the macros. For reasons beyond my understanding, GCC will sometimes (albeit rarely) complain during complex builds if this isn't the last argument on the line.
Important Notes
1) By default, C::B will select CPP as the default compiler variable for a new source file, and the file will not be compiled or linked to a target. Whenever you add or create a new ObjC source (*.m) in your project, you must right-click on it and go to Properties.... Under advanced, change the compiler variable to CC. Under Build, select both Compile file and Link file. Before you close the dialog, go to General and uncheck File is read-only. This will automatically get selected when you change the other options and if you close the dialog before you uncheck it, you'll have to go back and change it, then close and reopen the file in the viewer before you can edit it.
2) When you add a header file (*.h), you'll also need to open up its properties window and change the compiler variable to CC. You don't need to do anything else to it.
Troubleshooting
There's a small handful of pitfalls you can fall into when attempting to compile Objective-C applications using TDM-GCC or MinGW, and these pitfalls are unfortunately not well documented. This section tries to identify and solve the currently identified issues; note that these are issues as of the current GCC 4.5.x branch, unless otherwise noted. There is currently no 4.6.x port of GCC for TDM-GCC/MinGW available; the GCC 4.6.x branch introduces an entirely new Objective-C library that brings the ObjC implementation inline with Apple's own implementation of the library. Many of these issues are likely solved in this newest release.
There has been some trouble with the -lobjc flag trying to link to libobjc.dll.a, which has been a nonfunctional shared library for some time. If this shared library is not removed before compilation, gcc will throw undefined reference to ... errors at every ObjC method or library call during the linking stage. This shared library must be removed, and in the case of TDM-GCC, is located at: [mingw install directory]/lib/gcc/[tdm install type]/[version]/libobjc.dll.a. If you've installed TDM-GCC x64, you must also remove the 32-bit copy of the shared library, located at: ...[version]/32/libobjc.dll.a.
BOOL Redefined
I haven't experienced this error as of TDM-GCC 4.5.2, but on TDM-GCC 4.5.1 and earlier, attempts to build a 32-bit ObjC application that also imports <windows.h> would produce a BOOL redefined error. This is the result of the windows headers defining their own version of BOOL; the headers do make checks to see if ObjC is being used (by checking if __OBJC__ is defined) and change various types and declarations to make themselves compatible, but the ObjC library doesn't define __OBJC__ because it also changes its declarations if this is already defined. You can't simply define __OBJC__ at the beginning of a program however, as it will cause a failure to build with libobjc.a. The proper way to patch in compatibility is to modify the end of objc.h (located at: .../[version]/include/objc/objc.h), and import it in your program before you import the windows headers.
Patch to objc.h
IMP objc_msg_lookup(id receiver, SEL op);

#define __OBJC__    /* Insert Patch Here */

#ifdef __cplusplus
}
#endif

#endif /* not __objc_INCLUDE_GNU */
Test Build
Here's a bare-bones project you can throw together to test if your C::B settings will compile ObjC correctly. You can't actually test with just strict C, since any problems with the Objective-C compiler will only manifest when using ObjC functionality.
main.m
#import <stdlib.h>
#import "TestObject.h"

int main(int argc, char** argv)
{
    TestObject *aTestObject = [[TestObject alloc] init];
    printf("Initial Value: %d\n", [aTestObject value]);
    printf("+45 Value: %d\n", [aTestObject add: 45]);
    printf("-63 Value: %d\n", [aTestObject subtract: 63]);
    [aTestObject add: 103];
    printf("+103 Value: %d\n", [aTestObject value]);
    return (EXIT_SUCCESS);
}
TestObject.h
#import <objc/Object.h>

@interface TestObject : Object
{
    int internalInt;
}

- (int)add:(int)anInt;
- (int)subtract:(int)anInt;
- (int)value;

@end
TestObject.m
#import "TestObject.h" @implementation TestObject - (id)init { if ((self = [super init])) { internalInt = 0; } return self; } - (int)add:(int)anInt { internalInt += anInt; return internalInt; } - (int)subtract:(int)anInt { internalInt -= anInt; return internalInt; } - (int)value { return internalInt; } @end
Objective-C Library Licensing
Unlike other libraries included with GCC, the Objective-C library may be statically linked into a project without extending the GNU GPL to that project. Although the library is covered under the GNU GPL itself, it has a special exemption in its license because it is a necessary library to compile a language.
License Exemption
/* */
|
https://wiki.codeblocks.org/index.php/Installing_Objective-C_Compiler
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
#include <fontconfig/fontconfig.h> FcBool FcDirSave (FcFontSet *set, FcStrSet *dirs, const FcChar8 *dir);
This function now does nothing aside from returning FcFalse. It used to create the per-directory cache file for dir and populate it with the fonts in set and subdirectories in dirs. All of this functionality is now automatically managed by FcDirCacheLoad and FcDirCacheRead.
Fontconfig version 2.11.0
|
https://www.commandlinux.com/man-page/man3/FcDirSave.3.html
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
The surprising complexity of making something that is, on its surface, ridiculously simple
Progress bars are one of the most common, familiar UI components in our lives. We see them every time we download a file, install software, or attach something to an email. They live in our browsers, on our phones, and even on our TVs.
And yet — making a good progress bar is a surprisingly complex task!
In this post, I’ll describe all of the components of making a quality progress bar for the web, and hopefully by the end you’ll have a good understanding of everything you’d need to build your own.
This post describes everything I had to learn (and some things I didn’t!) to make celery-progress, a library that hopefully makes it easy to drop in dependency-free progress bars to your Django/Celery applications.
That said, most of the concepts in this post should translate across all languages/environments, so even if you don’t use Python you probably can learn something new.
Why Progress Bars?
This might be obvious, but just to get it out of the way — why do we use progress bars?
The basic reason is to provide users feedback for something that takes longer than they are used to waiting. According to kissmetrics, 40% of people abandon a website that takes more than 3 seconds to load! And while you can use something like a spinner to help mitigate this wait, a tried and true way to communicate to your users while they’re waiting for something to happen is to use a progress bar.
Generally, progress bars are great whenever something takes longer than a few seconds and you can reasonably estimate its progress over time.
Some examples include:
- When your application first loads (if it takes a long time to load)
- When processing a large data import
- When preparing a file for download
- When the user is in a queue waiting for their request to get processed
The Components of a Progress Bar
Alright, with that out of the way lets get into how to actually build these things!
It’s just a little bar filling up across a screen. How complicated could it be?
Actually, quite!
The following components are typically a part of any progress bar implementation:
- A front-end, which typically includes a visual representation of progress and (optionally) a text-based status.
- A backend that will actually do the work that you want to monitor.
- One or more communication channels for the front end to hand off work to the backend.
- One or more communication channels for the backend to communicate progress to the front-end.
Immediately we can see one inherent source of complexity. We want to both do some work in the backend and show that work happening on the frontend. This immediately means we will be involving multiple processes that need to interact with each other asynchronously.
These communication channels are where much of the complexity lies. In a relatively standard Django project, the front-end browser might submit an AJAX HTTP request (JavaScript) to the backend web app (Django). This in turn might pass that request along to the task queue (Celery) via a message broker (RabbitMQ/Redis). Then the whole thing needs to happen in reverse to get information back to the front end!
The entire process might look something like this:
Let’s dive into all of these components and see how they work in a practical example.
The Front End
The front end is definitely the easiest part of the progress bar. With just a few small lines of HTML/CSS, you can quickly make a decent looking horizontal bar using the background color and width attributes. Splash in a little JavaScript to update it and you’re good to go!
function updateProgress(progressBarElement, progressBarMessageElement, progress) {
    progressBarElement.style.backgroundColor = '#68a9ef';
    progressBarElement.style.width = progress.percent + "%";
    progressBarMessageElement.innerHTML = progress.current + ' of ' + progress.total + ' processed.';
}

var trigger = document.getElementById('progress-bar-trigger');
trigger.addEventListener('click', function(e) {
    var barWrapper = document.getElementById('progress-wrapper');
    barWrapper.style.display = 'inherit'; // show bar
    var bar = document.getElementById("progress-bar");
    var barMessage = document.getElementById("progress-bar-message");
    for (var i = 0; i < 11; i++) {
        setTimeout(updateProgress, 500 * i, bar, barMessage, {
            percent: 10 * i,
            current: 10 * i,
            total: 100
        })
    }
})
The Backend
The backend is equally simple. This is essentially just some code that’s going to execute on your server to do the work you want to track. This would typically be written in whatever application stack you’re using (in this case Python and Django). Here’s an overly simplified version of what the backend might look like:
def do_work(self, list_of_work):
    for work_item in list_of_work:
        do_work_item(work_item)
    return 'work is complete'
Doing the Work
Okay so we’ve got our front-end progress bar, and we’ve got our work doer. What’s next?
Well, we haven’t actually said anything about how this work will get kicked off. So let’s start there.
The Wrong Way: Doing it in the Web Application
In a typical ajax workflow this would work the following way:
- Front-end initiates request to web application
- Web application does work in the request
- Web application returns a response when done
In a Django view, that would look something like this:
def my_view(request):
    do_work()
    return HttpResponse('work done!')
The wrong way: calling the function from the view
The problem here is that the do_work function might do a lot of work that takes a long time (if it didn't, it wouldn't make sense to add a progress bar for it).
Doing a lot of work in a view is generally considered a bad practice for several reasons, including:
- You create a poor user experience, since people have to wait for long requests to finish
- You open your site up to potential stability issues with lots of long-running, work-doing requests (which could be triggered either maliciously or accidentally)
For these reasons, and others, we need a better approach for this.
The Better Way: Asynchronous Task Queues (aka Celery)
Most modern web frameworks have created asynchronous task queues to deal with this problem. In Python, the most common one is Celery. In Rails, there is Sidekiq (among others).
The details between these vary, but the fundamental principles of them are the same. Basically, instead of doing work in an HTTP request that could take arbitrarily long — and be triggered with arbitrary frequency — you stick that work in a queue and you have background processes — often referred to as workers — that pick the jobs up and execute them.
This asynchronous architecture has several benefits, including:
- Not doing long-running work in web processes
- Enabling rate-limiting of the work done — work can be limited by the number of worker-processes available
- Enabling work to happen on machines that are optimized for it, for example, machines with high numbers of CPUs
The Mechanics of Asynchronous Tasks
The basic mechanics of an asynchronous architecture are relatively simple, and involve three main components: the client(s), the worker(s), and the message broker.
The client is primarily responsible for the creation of new tasks. In our example, the client is the Django application, which creates tasks on user input via a web request.
The workers are the actual processes that do the work. These are our Celery workers. You can have an arbitrary number of workers running on however many machines, which allows for high availability and horizontal scaling of task processing.
The client and task queue talk to each other via a message broker, which is responsible for accepting tasks from the client(s) and delivering them to the worker(s). The most common message broker for Celery is RabbitMQ, although Redis is also a commonly used and feature complete message broker.
When building a standard celery application, you will typically do development of the client and worker code, but the message broker will be a piece of infrastructure that you just have to stand up (and beyond that can [mostly] ignore).
An Example
While this all sounds rather complicated, Celery does a good job making it quite easy for us via nice programming abstractions.
To convert our work-doing function to something that can be executed asynchronously, all we have to do is add a special decorator:
from celery import task

# this decorator is all that's needed to tell celery this is a
# worker task
@task
def do_work(self, list_of_work):
    for work_item in list_of_work:
        do_work_item(work_item)
    return 'work is complete'
Annotating a work function to be called from Celery
Similarly, calling the function asynchronously from the Django client is similarly straightforward:
def my_view(request):
    # the .delay() call here is all that's needed
    # to convert the function to be called asynchronously
    do_work.delay()
    # we can't say 'work done' here anymore
    # because all we did was kick it off
    return HttpResponse('work kicked off!')
Calling the work function asynchronously
With just a few extra lines of code, we’ve converted our work to an asynchronous architecture! As long as you’ve got your worker and broker processes configured and running, this should just work.
Tracking the Progress
Alrighty, so we’ve finally got our task running in the background. But now we want to track progress on it. So how does that work, exactly?
We’ll again need to do a few things. First we’ll need a way of tracking progress within the worker job. Then we’ll need to communicate that progress all the way back to our front-end so we can update the progress bar on the page. Once again, this ends up being quite a bit more complicated than you might think!
Using an Observer Object to Track Progress in the Worker
Readers of the seminal Gang of Four’s Design Patterns might be familiar with the observer pattern. The typical observer pattern includes a subject which tracks state, as well as one or more observers that do something in response to state. In our progress scenario, the subject is the worker process/function that is doing the work, and the observer is the thing that is going to track the progress.
There are many ways to link the subject and the observer, but the simplest is to just pass the observer in as an argument to the function doing the work.
That looks something like this:
@task
def do_work(self, list_of_work, progress_observer):
    total_work_to_do = len(list_of_work)
    for i, work_item in enumerate(list_of_work):
        do_work_item(work_item)
        # tell the progress observer how many out of the total items
        # we have processed
        progress_observer.set_progress(i, total_work_to_do)
    return 'work is complete'
Using an observer to monitor work progress
Now all we have to do is pass in a valid progress_observer and voilà, our progress will be tracked!
Getting Progress Back to the Client
You might be thinking “wait a minute… you just called a function called set_progress, you didn’t actually do anything!”
True! So how does this actually work?
Remember — our goal is to get this progress information all the way up to the webpage so we can show our users what’s going on. But the progress tracking is happening all the way in the worker process! We are now facing a similar problem we had with handing off the asynchronous task earlier.
Thankfully, Celery also provides a mechanism for passing messages back to the client. This is done via a mechanism called result backends, and, like brokers, you have the option of several different backends. Both RabbitMQ and Redis can be used as brokers and result backends and are reasonable choices, though there is technically no coupling between the broker and the result backend.
Anyway, like brokers, the details typically don’t come up unless you’re doing something pretty advanced. But the point is that you stick the result from the task somewhere (with the task’s unique ID), and then other processes can get information about tasks by ID by asking the backend for it.
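As a rough sketch of what that configuration can look like (the URLs are placeholders for a local Redis instance, not values from the original article), you might point Celery at Redis for both roles:

# celery.py, a minimal sketch assuming Redis is used as broker and result backend
from celery import Celery

app = Celery(
    'myproject',
    broker='redis://localhost:6379/0',    # where tasks are queued
    backend='redis://localhost:6379/1',   # where task state/results are stored
)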
In Celery, this is abstracted quite well via the state associated with the task. The state allows us to set an overall status, as well as attach arbitrary metadata to the task. This is a perfect place to store our current and total progress.
Setting the state
task.update_state( state=PROGRESS_STATE, meta={'current': current, 'total': total} )
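Tying the two together, the observer passed into the task can simply wrap update_state(). This is a simplified sketch, not the exact ProgressRecorder class from celery-progress:

PROGRESS_STATE = 'PROGRESS'

class ProgressObserver:
    """Reports progress by writing it to the Celery result backend."""

    def __init__(self, task):
        self.task = task

    def set_progress(self, current, total):
        percent = round(100.0 * current / total, 1) if total else 0
        self.task.update_state(
            state=PROGRESS_STATE,
            meta={'current': current, 'total': total, 'percent': percent},
        )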
Reading the state
from celery.result import AsyncResult

result = AsyncResult(task_id)
print(result.state)  # will be set to PROGRESS_STATE
print(result.info)   # metadata will be here
Getting Progress Updates to the Front End
Now that we can get progress updates out of the workers / tasks and into any other client, the final step is to just get that information to the front end and display it to the user.
If you want to get fancy, you can use something like websockets to do this in real time. But the simplest version is to just poll a URL every so often to check on progress. We can just serve the progress information up as JSON via a Django view and process and render it client-side.
Django view:
def get_progress(request, task_id):
    result = AsyncResult(task_id)
    response_data = {
        'state': result.state,
        'details': result.info,
    }
    return HttpResponse(
        json.dumps(response_data),
        content_type='application/json'
    )
Django view to return progress as JSON.
JavaScript code:
function updateProgress(progressUrl) {
    fetch(progressUrl).then(function(response) {
        response.json().then(function(data) {
            // update the appropriate UI components
            setProgress(data.state, data.details);
            // and do it again every half second
            setTimeout(updateProgress, 500, progressUrl);
        });
    });
}
Javascript code to poll for progress and update the UI.
Putting it All Together
This has been quite a lot of detail on what is — on its face — a very simple and everyday part of our lives with computers! I hope you’ve learned something.
If you need a simple way to make progress bars for you Django/celery applications you can check out celery-progress — a library I wrote to help make all of this a bit easier. There is also a demo of it in action on Build with Django.
Thanks for reading! If you’d like to get notified whenever I publish content like this on building things with Python and Django, please sign up to receive updates below!
Originally published at buildwithdjango.com.
|
https://www.freecodecamp.org/news/how-to-build-a-progress-bar-for-the-web-with-django-and-celery-12a405637440/
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
The whatToShow argument allows you to iterate over only certain node types in a subtree. However, suppose you want to go beyond that. For example, you may have a program that reads XHTML documents and extracts all heading elements but ignores everything else. Or perhaps, you want to find all SVG content in a document, or all the GIFT elements whose price attribute has a value greater than $10.00. Or perhaps you want to find those SKU elements containing an ID of a product that needs to be reordered, as determined by consulting an external database. All of these tasks and many more besides can be implemented through node filters on top of a NodeIterator or a TreeWalker.
Example 12.5 summarizes the NodeFilter interface. You implement this interface in a class of your own devising. The acceptNode() method contains the the custom logic that decides whether any given node passes the filter or not. This method can return one of the three named constants NodeFilter.FILTER_ACCEPT, NodeFilter.FILTER_REJECT, or NodeFilter.FILTER_SKIP to indicate what it wants to do with that node.
Example 12.5. The NodeFilter interface
package org.w3c.dom.traversal;

public interface NodeFilter {

  // Constants returned by acceptNode
  public static final short FILTER_ACCEPT = 1;
  public static final short FILTER_REJECT = 2;
  public static final short FILTER_SKIP   = 3;

  // Constants for whatToShow
  public static final int SHOW_ALL                    = 0xFFFFFFFF;
  public static final int SHOW_ELEMENT                = 0x00000001;
  public static final int SHOW_ATTRIBUTE              = 0x00000002;
  public static final int SHOW_TEXT                   = 0x00000004;
  public static final int SHOW_CDATA_SECTION          = 0x00000008;
  public static final int SHOW_ENTITY_REFERENCE       = 0x00000010;
  public static final int SHOW_ENTITY                 = 0x00000020;
  public static final int SHOW_PROCESSING_INSTRUCTION = 0x00000040;
  public static final int SHOW_COMMENT                = 0x00000080;
  public static final int SHOW_DOCUMENT               = 0x00000100;
  public static final int SHOW_DOCUMENT_TYPE          = 0x00000200;
  public static final int SHOW_DOCUMENT_FRAGMENT      = 0x00000400;
  public static final int SHOW_NOTATION               = 0x00000800;

  public short acceptNode(Node n);

}
For iterators, there are really only two options for the return value of acceptNode(), FILTER_ACCEPT and FILTER_SKIP. NodeIterator treats FILTER_REJECT the same as FILTER_SKIP. (Tree walkers do make a distinction between these two.) Rejecting a node prevents it from appearing in the list, but does not prevent its children and descendants from appearing. They will be tested separately.
The NodeFilter does not override whatToShow. They work in concert. For example, whatToShow can limit the iterator to only elements. Then the acceptNode() method can confidently cast every node that’s passed to it to Element without first checking its node type.
To configure an iterator with a filter, pass the NodeFilter object to the createNodeIterator() method. The NodeIterator then passes each potential candidate node to the acceptNode() method to decide whether or not to include it in the iterator.
For an example, let’s revisit last chapter’s DOMSpider program. That program needed to recurse through the entire document, looking at each and every node to see whether or not it was an element and, if it was, whether or not it had an xlink:type attribute with the value simple. We can write that program much more simply using a NodeFilter to find the simple XLinks and a NodeIterator to walk through them. Example 12.6 demonstrates the necessary filter.
Example 12.6. An implementation of the NodeFilter interface
import org.w3c.dom.traversal.NodeFilter;
import org.w3c.dom.*;

public class XLinkFilter implements NodeFilter {

  public static String XLINK_NAMESPACE = "http://www.w3.org/1999/xlink";

  public short acceptNode(Node node) {
    Element candidate = (Element) node;
    String type = candidate.getAttributeNS(XLINK_NAMESPACE, "type");
    if (type.equals("simple")) return FILTER_ACCEPT;
    return FILTER_SKIP;
  }

}
Here’s a spider() method that has been revised to take advantage of NodeIterator and this filter. This can replace both the spider() and findLinks() methods of the previous version. The filter replaces the isSimpleLink() method. The code is quite a bit simpler than the version in the last chapter.
public void spider(String systemID) {

  currentDepth++;
  try {
    if (currentDepth < maxDepth) {
      Document document = parser.parse(systemID);
      process(document, systemID);

      Vector uris = new Vector();
      // search the document for uris,
      // store them in vector, and print them
      DocumentTraversal traversal = (DocumentTraversal) document;
      NodeIterator xlinks = traversal.createNodeIterator(
        document.getDocumentElement(), // start at root element
        NodeFilter.SHOW_ELEMENT,       // only see elements
        new XLinkFilter(),             // only see simple XLinks
        true                           // expand entities
      );
      Element xlink;
      while ((xlink = (Element) xlinks.nextNode()) != null) {
        String uri = xlink.getAttributeNS(XLINK_NAMESPACE, "href");
        if (!uri.equals("")) {
          try {
            String wholePage = absolutize(systemID, uri);
            if (!visited.contains(wholePage)
             && !uris.contains(wholePage)) {
              uris.add(wholePage);
            }
          }
          catch (MalformedURLException e) {
            // If it's not a good URL, then we can't spider it
            // anyway, so just drop it on the floor.
          }
        } // end if
      } // end while
      xlinks.detach();

      Enumeration e = uris.elements();
      while (e.hasMoreElements()) {
        String uri = (String) e.nextElement();
        visited.add(uri);
        spider(uri);
      }
    }
  }
  catch (SAXException e) {
    // Couldn't load the document,
    // probably not well-formed XML, skip it
  }
  catch (IOException e) {
    // Couldn't load the document,
    // likely network failure, skip it
  }
  finally {
    currentDepth--;
    System.out.flush();
  }
}
There is, however, one feature the earlier version had that this NodeIterator based variant doesn’t have. Last chapter’s DOMSpider tracked xml:base attributes. Since the xml:base attributes may appear on ancestors of the XLinks rather than on the XLinks themselves, a NodeIterator really isn’t appropriate for tracking them. The key problem is that xml:base has hierarchical scope. That is, an xml:base attribute only applies to the element on which it appears and its descendants. While the filter could easily be adjusted to notice elements that have xml:base attributes as well as those that have xlink:type="simple" attributes, an iterator really can’t tell which other elements any given xml:base attribute applies to.
DOM Level 3 will add a getBaseURI() method to the Node interface that will alleviate the need to track xml:base attributes manually. In fact, this will be even more effective than the manual tracking of last chapter’s example, because it will also notice different base URIs that arise from external entities. Revising the spider() method to take advantage of this only requires changing a couple of lines of code as follows:
String wholePage = absolutize(xlink.getBaseURI(), uri);
Unfortunately, this method is not yet supported by any common parsers. However, it should be implemented in the not too distant future.
|
http://www.ibiblio.org/xml/books/xmljava/chapters/ch12s02.html
|
CC-MAIN-2019-39
|
en
|
refinedweb
|
import "k8s.io/kubernetes/pkg/registry/storage/volumeattachment"
Package volumeattachment provides Registry interface and its REST implementation for storing volumeattachment api objects.
StatusStrategy is the default logic that applies when creating and updating VolumeAttachmentStatus subresource via the REST API.
var Strategy = volumeAttachmentStrategy{legacyscheme.Scheme, names.SimpleNameGenerator}
Strategy is the default logic that applies when creating and updating VolumeAttachment objects via the REST API.
|
https://godoc.org/k8s.io/kubernetes/pkg/registry/storage/volumeattachment
|
CC-MAIN-2019-39
|
en
|
refinedweb
|
The command-line utility lock_lint analyzes the use of
mutex and multiple
readers/single writer locks, and reports on inconsistent use of these locking
techniques that may lead to data races and deadlocks in multi-threaded applications.
In the multithreading model, a process consists of one or
more threads of control that share a common address space and most
other process resources. Threads must acquire and release locks
associated with the data they share. If they fail to do so, a data
race could result, causing the program to produce different
results when rerun with the same input.
Data races are easy to introduce. Simply accessing a variable
without first acquiring the appropriate lock can cause a data race.
But data race situations are generally very difficult to find.
Symptoms generally manifest themselves only if two threads access the
improperly protected data at nearly the same time; hence a data race
may easily run correctly without showing any signs of a problem. It
is extremely difficult to exhaustively test all concurrent states of
even a simple multithreaded program, so conventional testing and
debugging are not always an adequate defense against data races.
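As a minimal sketch (the names are hypothetical), the following C fragment contains exactly this kind of race: two threads started on worker() both update counter without acquiring any lock, so the final value depends on how the increments interleave.

#include <pthread.h>
#include <stddef.h>

int counter = 0;                     /* shared by all threads, but no lock protects it */

void *worker(void *arg) {
    for (int i = 0; i < 100000; i++)
        counter++;                   /* unprotected read-modify-write: a data race */
    return NULL;
}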
Most processes share several resources. Operations within the
application may require access to more than one of those resources.
This means that the operation needs to grab a lock for each of the
resources before performing the operation. If different operations
use a common set of resources, but the order in which they acquire
the locks is inconsistent, there is a potential for deadlock.
The simplest case of deadlock occurs when two threads hold locks for
different resources and each thread tries to acquire the lock for the
resource held by the other thread.
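A sketch of that simplest case (hypothetical locks lock_a and lock_b): if one thread runs transfer_ab() while another runs transfer_ba(), each can end up holding one lock and waiting forever for the other.

#include <pthread.h>

pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

void transfer_ab(void) {             /* acquires a, then b */
    pthread_mutex_lock(&lock_a);
    pthread_mutex_lock(&lock_b);
    /* ... operate on both resources ... */
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
}

void transfer_ba(void) {             /* acquires b, then a: inconsistent order */
    pthread_mutex_lock(&lock_b);
    pthread_mutex_lock(&lock_a);
    /* ... operate on both resources ... */
    pthread_mutex_unlock(&lock_a);
    pthread_mutex_unlock(&lock_b);
}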
When analyzing locks and how they are used, LockLint (the command
is lock_lint) detects a common cause of data races: failure
to hold the appropriate lock while accessing a variable.
The following tables list the routines of the Solaris OS and POSIX
libthread APIs recognized by LockLint.
TABLE 1 Mutex (Mutual Exclusion) Locks
Solaris: mutex_lock, mutex_unlock, mutex_trylock
POSIX: pthread_mutex_lock, pthread_mutex_unlock, pthread_mutex_trylock
Kernel (Solaris only): mutex_enter, mutex_exit, mutex_tryenter
TABLE 2 Reader-Writer Locks
Solaris: rw_rdlock, rw_wrlock, rw_unlock, rw_tryrdlock, rw_trywrlock
Kernel (Solaris only): rw_enter, rw_exit, rw_tryenter, rw_downgrade, rw_tryupgrade
TABLE 3 Condition Variables
Solaris: cond_broadcast, cond_wait, cond_timedwait, cond_signal
POSIX: pthread_cond_broadcast, pthread_cond_wait, pthread_cond_timedwait, pthread_cond_signal
Kernel (Solaris only): cv_broadcast, cv_wait, cv_wait_sig, cv_wait_sig_swap, cv_timedwait, cv_timedwait_sig, cv_signal
Additionally, LockLint recognizes the structure types shown in Table 4.

TABLE 4 Lock Structures
Solaris: mutex_t, rwlock_t
POSIX: pthread_mutex_t
Kernel (Solaris only): kmutex_t, krwlock_t
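For orientation, here is a minimal sketch (hypothetical names) of the kind of code LockLint analyzes: a Solaris mutex_t from Table 4 protecting a shared counter through the mutex_lock and mutex_unlock routines from Table 1.

#include <synch.h>

mutex_t count_lock;                  /* intended to protect hit_count */
int     hit_count;

void record_hit(void) {
    mutex_lock(&count_lock);         /* LockLint tracks this acquisition */
    hit_count++;
    mutex_unlock(&count_lock);
}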
LockLint reports several kinds of basic information about the
modules it analyzes, including:
Locking side effects of functions. Unknown side effects can
lead to data races or deadlocks.
Accesses to variables that are not consistently protected by
at least one lock, and accesses that violate assertions about which
locks protect them. This information can point to a potential data
race.
Cycles and inconsistent lock-order acquisitions. This
information can point to potential deadlocks.
Variables that were protected by a given lock. This can
assist in judging the appropriateness of the chosen granularity,
that is, which variables are protected by which locks.
LockLint provides subcommands for specifying assertions about the
application. During the analysis phase, LockLint reports any
violation of the assertions.
Note - Add assertions liberally, and use the
analysis phase to refine assertions and to make sure that new code
does not violate the established locking conventions of the program.
The compiler gathers the information used by LockLint. More
specifically, you specify a command-line option, -Zll, to
the C compiler to generate a .ll file for each .c
source code file. The .ll file contains information about
the flow of control in each function and about each access to a
variable or operation on a mutex or readers-writer lock.
Note - No .o file is produced when
you compile with the -Zll flag.
There are two ways for you to interact with LockLint: source code
annotations and the command-line interface.
Source code annotations are assertions and NOTEs
that you place in your source code to pass information to LockLint.
LockLint can verify certain assertions about the states of locks at
specific points in your code, and annotations can be used to verify
that locking behavior is correct or avoid unnecessary error
warnings.
Alternatively, you can use LockLint subcommands to load the
relevant .ll files and make assertions. This interface to
LockLint consists of a lock_lint
command and a set of subcommands that you specify on the lock_lint
command line.
The important features of the lock_lint subcommands
are:
You can exercise a few additional controls that have no
corresponding annotations.
You can make a number of useful queries about the functions,
variables, function pointers, and locks in your program.
LockLint subcommands help you analyze your code and discover which
variables are not consistently protected by locks. You may make
assertions about which variables are supposed to be protected by a
lock and which locks are supposed to be held whenever a function is
called. Running the analysis with such assertions in place will show
you where the assertions are violated.
Most programmers report that they find source code annotations
preferable to command-line subcommands. However, there is not always
a one-to-one correspondence between the two.
Using LockLint consists of three steps:
1. Setting up the environment for using LockLint
2. Compiling the source code to be analyzed, producing the
LockLint database files (.ll files)
3. Using the lock_lint command to run a LockLint
session
These steps are described in the rest of this section.
Figure 1 shows the flow control of tasks involved in using
LockLint:
FIGURE 1 LockLint Control Flow
Use LockLint to refine the set of assertions you maintain for the
implementation of your system. A rich set of assertions enables
LockLint to validate existing and new source code as you work.
The LockLint interface consists of the lock_lint
command, which is executed in a shell, and the lock_lint
subcommands. By default, LockLint uses the shell given by the
environment variable $SHELL. Alternatively, LockLint can
execute any shell by specifying the shell to use on the lock_lint
start command. This example starts a LockLint session in the
Korn shell:
% lock_lint start /bin/ksh
LockLint creates an environment variable called LL_CONTEXT,
which is visible in the child shell. If you are using a shell that
provides for initialization, you can arrange to have the lock_lint
command source a .ll_init file in your home directory, and
then execute a .ll_init file in the current directory if
it exists. If you use csh, you can do this by inserting
the following code into your .cshrc file:
if ($?LL_CONTEXT) then
    if ( -x $(HOME)/.ll_init ) source $(HOME)/.ll_init
endif
It is better not to have your .cshrc source the
file in your current working directory, since others may want to run
LockLint on those same files, and they may not use the same shell you
do. Since you are the only one who is going to use your
$(HOME)/.ll_init, you should source that one, so
that you can change the prompt and define aliases for use during your
LockLint session. The following version of ~/.ll_init does
this for csh:
# Cause analyze subcommand to save state before analysis.
alias analyze "lock_lint save before analyze;\
    lock_lint analyze"

# Change prompt to show we are in lock_lint.
set prompt="lock_lint~$prompt"
When executing subcommands, remember that you can use pipes,
redirection, backward quotes (`), and so on to accomplish your aims.
For example, the following command asserts that lock foo
protects all global variables (the formal name for a global variable
begins with a colon):
% lock_lint assert foo protects `lock_lint vars | grep ^:`
In general, the subcommands are set up for easy use with filters
such as grep and sed. This is particularly true
for vars and funcs, which put out a single line
of information for each variable or function. Each line contains the
attributes (defined and derived) for that variable or function. The
following example shows which members of struct bar are
supposed to be protected by member lock:
% lock_lint vars -a `lock_lint members bar` | grep =bar::lock
Since you are using a shell interface, a log of user commands can
be obtained by using the shell's history function (the history level
may need to be made large in the .ll_init file).
LockLint puts temporary files in /var/tmp unless
$TMPDIR is set.
To modify your makefile to produce .ll files, first use
the rule for creating a .o from a .c to write a
rule to create a .ll from a .c. For example,
from:
# Rule for making .o from .c in ../src.
%.o: ../src/%.c
	$(COMPILE.c) -o $@ $<
you might write:
# Rule for making .ll from .c in ../src.
%.ll: ../src/%.c
	cc $(CFLAGS) $(CPPFLAGS) $(FOO) $<
In the above example, the -Zll flag would have to be
specified in the make macros for compiler options (CFLAGS
and CPPFLAGS).
If you use a suffix rule, you will need to define .ll
as a suffix. For that reason some prefer to use % rules.
If the appropriate .o files are contained in a make
variable FOO_OBJS, you can create FOO_LLS with
the line:
FOO_LLS = ${FOO_OBJS:%.o=%.ll}
or, if they are in a subdirectory ll:
FOO_LLS = ${FOO_OBJS:%.o=ll/%.ll}
If you want to keep the .ll files in subdirectory ll/, you can have the makefile create that directory automatically with the following target:
.INIT:
	@if [ ! -d ll ]; then mkdir ll; fi
For LockLint to analyze your source code, you must first compile
it using the -Zll option of the Sun Studio C compiler. The
compiler then produces the LockLint database files (.ll
files), one for each .c file compiled. Later you load the
.ll files into LockLint with the load
subcommand.
LockLint sometimes needs a simpler view of the code to return meaningful results during analysis. To allow you to provide this simpler view, the -Zll option automatically defines the preprocessor symbol __lock_lint; the likely uses of __lock_lint are discussed later in this article.
The LockLint user interface consists of subcommands that are specified with the lock_lint command:

lock_lint [subcommand]
In this example subcommand is one of a set of subcommands
used to direct the analysis of the source code for data races and
deadlocks. More information about subcommands can be found in the
summary at the end of this article, or in the lock_lint(1)
man page.
The first subcommand of any LockLint session must be start,
which starts a subshell of your choice with the appropriate LockLint
context. Since a LockLint session is started within a subshell, you
exit by exiting that subshell. For example, to exit LockLint when
using the C shell, use the command exit.
LockLint's state consists of the set of databases loaded
and the specified assertions. Iteratively modifying that state and
rerunning the analysis can provide optimal information on potential
data races and deadlocks. Since the analysis can be done only once
for any particular state, the save, restore,
and refresh subcommands are provided as a means to
reestablish a state, modify that state, and retry the analysis.
Annotate your source code and compile it to create .ll
files.
Load the .ll files using the load
subcommand.
Make assertions about locks protecting functions and
variables using the assert subcommand.
Make assertions about the order in which locks should be
acquired in order to avoid deadlocks, using the assert order
subcommand. Note - These specifications may also be
conveyed using source code annotations.
Check that LockLint has the right idea about which functions
are roots. If the funcs -o subcommand does not
show a root function as root, use the declare root
subcommand to fix it. If funcs -o shows a non-root
function as root, it's likely that the function should be listed as
a function target using the declare ... targets
subcommand.
Describe any hierarchical lock relationships (if you have
any--they are rare) using the assert rwlock subcommand.
Note - These specifications may also be conveyed
using source code annotations.
Tell LockLint to ignore any functions or variables you want
to exclude from the analysis using the ignore subcommand.
Be conservative in your use of the ignore
command. Make sure you should not be using one of the source code
annotations instead (for example, NO_COMPETING_THREADS_NOW).
Run the analysis using the analyze subcommand.
Investigate the errors. This may involve modifying the source using #ifdef __lock_lint (discussed later in this article) or adding source code annotations to accomplish steps 3, 4, 6, and 7.
Restore LockLint to the state it was in before the analysis
and rerun the analysis as necessary. Note - It is
best to handle the errors in order. Otherwise, problems with locks
not being held on entry to a function, or locks being released while
not held, can cause lots of misleading messages about variables not
being properly protected.
Run the analysis using the analyze -v subcommand
and repeat the above step.
When the errors from the analyze subcommand are
gone, check for variables that are not properly protected by any
lock. Use the command: lock_lint vars -h | fgrep \*
Rerun the analysis using appropriate assertions to find out
where the variables are being accessed without holding the proper
locks. Remember that you cannot run analyze twice
for a given state, so it will probably help to save the state of
LockLint using the save subcommand before running
analyze. Then restore that state using refresh
or restore before adding more assertions. You may want to
set up an alias for analyze that automatically does a
save before analyzing.
LockLint acquires its information on the sources to be analyzed
with a set of databases produced by the C compiler. The LockLint
database for each source file is stored in a separate file. To
analyze a set of source files, use the load subcommand to
load their associated database files.
The files subcommand can be used to display a list of
the source files represented by the loaded database files. Once a
file is loaded, LockLint knows about all the functions, global data,
and external functions referenced in the associated source files.
As part of the analysis phase, LockLint builds a call graph
for all the loaded sources. Information about the functions defined
is available via the funcs subcommand. It is extremely
important for a meaningful analysis that LockLint have the correct
call graph for the code to be analyzed.
All functions that are not called by any of the loaded files are
called root functions. You may want to treat certain
functions as root functions even though they are called within the
loaded modules. For example, the function is an entry point for a
library that is also called from within the library. Do this by using
the declare root subcommand.
LockLint knows about all the references to function pointers
and most of the assignments made to them. Information about the
function pointers in the currently loaded files is available through
the funcptrs subcommand. Information about the calls made
via function pointers is available via the pointer calls
subcommand. If there are function pointer assignments that LockLint
could not discover, they may be specified with the declare
... targets subcommand.
By default, LockLint tries to examine all possible execution
paths. If the code uses function pointers, it's possible that many of
the execution paths are not actually followed in normal operation of
the code. This can result in the reporting of deadlocks that do not
really occur. To prevent this, use the disallow and
reallow subcommands to inform LockLint of execution paths
that never occur. To print out existing constraints, use the reallows
and disallows subcommands.
The LockLint database also contains information about all global variables accessed in the source code. Information about these variables is available via the vars subcommand.
One of LockLint's jobs is to determine if variable accesses are
consistently protected. If you are unconcerned about accesses to a
particular variable, you can remove it from consideration by using
the ignore subcommand.
You may also consider using one of the following source code
annotations, as appropriate.
SCHEME_PROTECTS_DATA
READ_ONLY_DATA
DATA_READABLE_WITHOUT_LOCK
NOW_INVISIBLE_TO_OTHER_THREADS
NOW_VISIBLE_TO_OTHER_THREADS
Source code annotations are an efficient way to refine the
assertions you make about the locks in your code. There are three
types of assertions: protection, order, and
side effects.
Protection assertions state what is protected by a given lock. For
example, the following source code annotations can be used to assert
how data is protected.
MUTEX_PROTECTS_DATA
RWLOCK_PROTECTS_DATA
RWLOCK_COVERS_LOCK
A variation of the assert subcommand is used to assert
that a given lock protects some piece of data or a function. Another
variation, assert ... covers, asserts that a
given lock protects another lock; this is used for hierarchical
locking schemes.
Order assertions specify the order in which the given locks
must be acquired. The source code annotation LOCK_ORDER or
the assert order subcommand can be used to specify lock
ordering.
Side effect assertions state that a function has the side effect
of releasing or acquiring a given lock. Use the following source code
annotations:
MUTEX_ACQUIRED_AS_SIDE_EFFECT
READ_LOCK_ACQUIRED_AS_SIDE_EFFECT
WRITE_LOCK_ACQUIRED_AS_SIDE_EFFECT
LOCK_RELEASED_AS_SIDE_EFFECT
LOCK_UPGRADED_AS_SIDE_EFFECT
LOCK_DOWNGRADED_AS_SIDE_EFFECT
NO_COMPETING_THREADS_AS_SIDE_EFFECT
COMPETING_THREADS_AS_SIDE_EFFECT
You can also use the assert side effect subcommand to specify side effects. In some cases you may want to make side effect assertions about an external function whose lock is not visible from the loaded modules (for example, the lock is static to the module that defines the external function). In such a case, you can "create" a lock by using a form of the declare subcommand.
LockLint's primary role is to report on lock usage inconsistencies
that may lead to data races and deadlocks. The analysis of
lock usage occurs when you use the analyze subcommand. The
result is a report on the following problems:
Functions that produce side effects on locks or violate
assertions made about side effects on locks. For example, a function
that changes the state of a mutex lock from locked to unlocked. The
most common unintentional side effect occurs when a function
acquires a lock on entry, and then fails to release it at some
return point. That path through the function is said to acquire the
lock as a side effect. This type of problem may lead to both data
races and deadlocks.
Functions that have inconsistent side effects on locks, that is, different paths through the function yield different side effects. This is a limitation of LockLint and a common cause of errors: LockLint cannot handle such functions, always reports them as errors, and does not correctly interpret them. For example, one of the returns from a function may forget to unlock a lock acquired in the function (a sketch of this case appears after this list).
Violations of assertions about which locks should be held
upon entry to a function. This problem may lead to a data race.
Violations of assertions that a lock should be held when a
variable is accessed. This problem may lead to a data race.
Violations of assertions that specify the order in which
locks are to be acquired. This problem may lead to a deadlock.
Failure to use the same, or asserted, mutex lock for all
waits on a particular condition variable.
Miscellaneous problems related to analysis of the source code
in relation to assertions and locks.
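As a sketch of the inconsistent-side-effect case mentioned above (hypothetical names), one return path below forgets to release the lock, so different paths through the function leave the lock in different states and LockLint reports the function:

#include <pthread.h>

pthread_mutex_t tbl_lock;
int lookup(int key);                 /* assumed helper */

int get_entry(int key) {
    pthread_mutex_lock(&tbl_lock);
    int v = lookup(key);
    if (v < 0)
        return -1;                   /* bug: returns with tbl_lock still held */
    pthread_mutex_unlock(&tbl_lock);
    return v;
}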
After analysis, you can use LockLint subcommands for:
Finding additional locking inconsistencies.
Forming appropriate declare, assert,
and ignore subcommands. These can be specified after
you've restored LockLint's state, prior to rerunning the analysis.
One such subcommand is order, which you can use to make
inquiries about the order in which locks have been acquired. This
information is particularly useful in understanding lock ordering
problems and making assertions about those orders so that LockLint
can more accurately diagnose potential deadlocks.
Another such subcommand is vars. The vars
subcommand reports which locks are consistently held when a variable
is read or written (if any). This information can be useful in
determining the protection conventions in code where the original
conventions were never documented, or the documentation has become
outdated.
There are limitations to LockLint's analysis. At the root of many
of its difficulties is the fact that LockLint doesn't know the values
of the program's variables.
LockLint solves some of these problems by ignoring the likely
cause or making simplifying assumptions. You can avoid some other
problems by using conditionally compiled code in the application.
Towards this end, the compiler always defines the preprocessor macro
__lock_lint when you compile with the -Zll
option. You can use this macro to make your code less ambiguous.
LockLint has trouble deducing:
Which functions your function pointers point to. There are
some assignments LockLint cannot deduce. The declare
subcommand can be used to add new possible assignments to the
function pointer. When LockLint sees a call through a
function pointer, it tests that call path for every possible value
of that function pointer. If you know or suspect that some calling
sequences are never executed, use the disallow and
reallow subcommands to specify which sequences are
executed.
Whether or not you locked a lock in code like this:
if (x) pthread_mutex_lock(&lock1);
In this case, two execution paths are created, one holding
the lock, and one not holding the lock, which will probably cause
the generation of a side effect message at the unlock
call. You may be able to work around this problem by using the
__lock_lint macro to force LockLint to treat a lock as
unconditionally taken. For example:
#ifdef __lock_lint
    pthread_mutex_lock(&lock1);
#else
    if (x) pthread_mutex_lock(&lock1);
#endif
LockLint has no problem analyzing code like this:
if (x) {
    pthread_mutex_lock(&lock1);
    foo();
    pthread_mutex_unlock(&lock1);
}
In this case, there is only one execution path, along which the
lock is acquired and released, causing no side effects.
Whether or not a lock was acquired in code like this:
rc = pthread_mutex_trylock(&lock1);
if (rc) ...
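One possible workaround, analogous to the #ifdef shown above, is to give LockLint a single unambiguous path in which the lock is simply taken (a sketch; the function and variable names are hypothetical):

#include <pthread.h>

pthread_mutex_t lock1 = PTHREAD_MUTEX_INITIALIZER;

void poll_shared_state(void) {
    int rc;
#ifdef __lock_lint
    pthread_mutex_lock(&lock1);          /* let LockLint treat the lock as unconditionally taken */
    rc = 0;
#else
    rc = pthread_mutex_trylock(&lock1);  /* returns 0 when the lock was acquired */
#endif
    if (rc == 0) {
        /* ... touch the protected data ... */
        pthread_mutex_unlock(&lock1);
    }
}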
Which lock is being locked in code like this:
pthread_mutex_t* lockp;
pthread_mutex_lock(lockp);
In such cases, the lock call is ignored.
Which variables and locks are being used in code where
elements of a structure are used (see Lock Inversions):
struct foo* p;
pthread_mutex_lock(p->lock);
p->bar = 0;
Which element of an array is being accessed. This is treated
analogously to the previous case; the index is ignored.
Anything about longjmps.
When you would exit a loop or break out of a recursion (so it
just stops proceeding down a path as soon as it finds itself looping
or after one recursion).
Some other LockLint difficulties:
LockLint only analyzes the use of mutex locks and
readers-writer locks. LockLint performs limited consistency checks
of mutex locks as used with condition variables. However, semaphores
and condition variables are not recognized as locks by LockLint.
Even with this analysis, there are limits to what LockLint can make
sense of.
There are situations where LockLint thinks two different
variables are the same variable, or that a single variable is two
different variables. (See Lock Inversions .)
It is possible to share automatic variables between threads
(via pointers), but LockLint assumes that automatics are unshared,
and generally ignores them (the only situation in which they are of
interest to LockLint is when they are function pointers).
LockLint complains about any functions that are not
consistent in their side effects on locks. #ifdef's and
assertions must be used to give LockLint a simpler view of functions
that may or may not have such a side effect.
During analysis, LockLint may produce messages about a lock
operation called rw_upgrade. Such a call does not really
exist, but LockLint rewrites code like
if (rw_tryupgrade(&lock1)) { ... }
as
if () { rw_tryupgrade(&lock1); ... }
such that, wherever rw_tryupgrade() occurs, LockLint
always assumes it succeeds.
One of the errors LockLint flags is an attempt to acquire a lock
that is already held. However, if the lock is unnamed (for example,
foo::lock), this error is suppressed, since the name
refers not to a single lock but to a set of locks. However, if the
unnamed lock always refers to the same lock, use the declare
one subcommand so that LockLint can report this type of
potential deadlock.
If you have constructed your own locks out of these locks (for
example, recursive mutexes are sometimes built from ordinary
mutexes), LockLint will not know about them. Generally you can use
#ifdef to make it appear to LockLint as though an ordinary
mutex is being manipulated. For recursive locks, use an unnamed lock
for this deception, since errors won't be generated when it is
recursively locked. For example:
void get_lock() {
#ifdef __lock_lint
    struct bogus *p;
    pthread_mutex_lock(p->lock);
#else
    <the real recursive locking code>
#endif
}
An annotation is some piece of text inserted into your source
code. You use annotations to tell LockLint things about your program
that it cannot deduce for itself, either to keep it from excessively
flagging problems or to have LockLint test for certain conditions.
Annotations also serve to document code, in much the same way that
assertions and NOTEs do.
Annotations are similar to some of the LockLint subcommands described in the command-line summary. In general, it's preferable to use source code annotations over these subcommands, as explained below.
There are several reasons to use source code annotations. In many
cases, such annotations are preferable to using a script of LockLint
subcommands.
Annotations, being mixed in with the code that they describe,
are generally better maintained than a script of LockLint
subcommands.
With annotations, you can make assertions about lock state at
any point within a function--wherever you put the assertion is where
the check occurs. With subcommands, the finest granularity you can
achieve is to check an assertion on entry to a function.
Functions mentioned in subcommands can change. If someone
changes the name of a function from func1 to func2,
a subcommand mentioning func1 fails (or worse, might work
but do the wrong thing, if a different function is given the name
func1).
Some annotations, such as NOTE(NO_COMPETING_THREADS_NOW),
have no subcommand equivalents.
Annotations provide a good way to document your program. In
fact, even if you are not using LockLint often, annotations are
worthwhile just for this purpose. For example, a header file
declaring a variable can document what lock or convention protects
the variable, or a function that acquires a lock and deliberately
returns without releasing it can have that behavior clearly declared
in an annotation.
LockLint shares the source code annotations scheme with several
other tools. When you install the Sun Studio C Compiler, you
automatically install the file SUNW_SPRO-cc-ssbd, which
contains the names of all the annotations that LockLint understands.
The file is located in installation_directory/SUNWspro/prod/lib/note.
You can specify a location other than the default by setting the
environment variable NOTEPATH, as in
setenv NOTEPATH other_location:$NOTEPATH
The default value for NOTEPATH is
installation_directory/SUNWSPRO/prod/lib/note:/usr/lib/note
To use source code annotations, include the file note.h
in your source or header files:
#include <note.h>
Many of the note-style annotations accept names--of locks or variables--as arguments. Names are specified using the syntax shown in Table 5.
TABLE 5 Specifying Names With LockLint NOTEs

Syntax             Meaning
Var                Named variable
Var.Mbr.Mbr...     Member of a named struct/union variable
Tag                Unnamed struct/union (with this tag)
Tag::Mbr.Mbr...    Member of an unnamed struct/union (with this tag)
Type               Unnamed struct/union (with this typedef)
Type::Mbr.Mbr...   Member of an unnamed struct/union (with this typedef)
In C, structure tags and types are kept in separate namespaces,
making it possible to have two different structs by the
same name as far as LockLint is concerned. When LockLint sees
foo::bar, it first looks for a struct with tag
foo; if it does not find one, it looks for a type foo
and checks that it represents a struct.
However, the proper operation of LockLint requires that a given
variable or lock be known by exactly one name. Therefore type
will be used only when no tag is provided for the struct,
and even then only when the struct is defined as part of a
typedef.
For example, Foo would serve as the type name in this
example:
typedef struct { int a, b; } Foo;
These restrictions ensure that there is only one name by which the
struct is known.
Name arguments do not accept general expressions. It is not valid,
for example, to write:
NOTE(MUTEX_PROTECTS_DATA(p->lock, p->a p->b))
However, some of the annotations do accept expressions (rather
than names); they are clearly marked.
In many cases an annotation accepts a list of names as an
argument. Members of a list should be separated by white space. To
simplify the specification of lists, a generator mechanism similar to
that of many shells is understood by all annotations taking such
lists. The notation for this is:
Prefix{A B ...}Suffix
where Prefix, Suffix, A, B, ... are
nothing at all, or any text containing no white space. The above
notation is equivalent to:
PrefixASuffix PrefixBSuffix ...
For example, the notation:
struct_tag::{a b c d}
is equivalent to the far more cumbersome text:
struct_tag::a struct_tag::b struct_tag::c struct_tag::d
This construct may be nested, as in:
foo::{a b.{c d} e}
which is equivalent to:
foo::a
foo::b.c
foo::b.d
foo::e
Where an annotation refers to a lock or another variable, a
declaration or definition for that lock or variable should already
have been seen.
If a name for data represents a structure, it refers to all
non-lock (mutex or readers-writer) members of the structure. If one
of those members is itself a structure, then all of its non-lock
members are implied, and so on. However, LockLint understands the
abstraction of a condition variable and therefore does not break it
down into its constituent members.
The NOTE interface enables you to insert information
for LockLint into your source code without affecting the compiled
object code. The basic syntax of a note-style annotation is either:
NOTE(NoteInfo)
or:
_NOTE(NoteInfo)
The preferred use is NOTE rather than _NOTE.
Header files that are to be used in multiple, unrelated projects,
should use _NOTE to avoid conflicts. If NOTE
has already been used, and you do not want to change, you should
define some other macro (such as ANNOTATION) using _NOTE.
For example, you might define an include file (say, annotation.h)
that contains the following:
#define ANNOTATION _NOTE
#include <sys/note.h>
The NoteInfo that gets passed to the NOTE
interface must syntactically fit one of the following:
NoteName
NoteName(Args)
NoteName is simply an identifier indicating the type of
annotation. Args can be anything, so long as it can be
tokenized properly and any parenthesis tokens are matched (so that
the closing parenthesis can be found). Each distinct NoteName
will have its own requirements regarding arguments.
This text uses NOTE to mean both NOTE and
_NOTE, unless explicitly stated otherwise.
NOTE may be invoked only at certain well-defined places
in source code:
At the top level; that is, outside of all function
definitions, type and struct definitions, variable
declarations, and other constructs. For example:
struct foo { int a, b; mutex_t lock; };
NOTE(MUTEX_PROTECTS_DATA(foo::lock, foo))

bar() {...}
At the top level within a block, among declarations or
statements. Here too, the annotation must be outside of all type and
struct definitions, variable declarations, and other
constructs. For example:
foo() { ...; NOTE(...) ...; ...; }
At the top level within a struct or union
definition, among the declarations. For example:
struct foo { int a; NOTE(...) int b; };
NOTE() may be used only in the locations described
above. For example, the following are invalid:
a = b NOTE(...) + 1;
typedef NOTE(...) struct foo Foo;
for (i=0; NOTE(...) i<10; i++) ...
A note-style annotation is not a statement; NOTE() may
not be used inside an if/else/for/while
body unless braces are used to make a block. For example, the
following causes a syntax error:
if (x) NOTE(...)
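Adding braces makes the body a block, so the same annotation becomes valid:

if (x) {
    NOTE(...)
    ...;
}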
The following annotations are allowed both outside and inside a
function definition. Remember that any name mentioned in an
annotation must already have been declared.
NOTE(MUTEX_PROTECTS_DATA(Mutex, DataNameList))
NOTE(RWLOCK_PROTECTS_DATA(Rwlock, DataNameList))
NOTE(SCHEME_PROTECTS_DATA("description", DataNameList))
The first two annotations tell LockLint that the lock should be
held whenever the specified data is accessed.
The third annotation, SCHEME_PROTECTS_DATA, describes
how data are protected if it does not have a mutex or readers-writer
lock. The description supplied for the scheme is simply text
and is not semantically significant; LockLint responds by ignoring
the specified data altogether. You may make description
anything you like.
Some examples help show how these annotations are used. The first
example is very simple, showing a lock that protects two variables:
mutex_t lock1;
int a,b;
NOTE(MUTEX_PROTECTS_DATA(lock1, a b))
In the next example, a number of different possibilities are
shown. Some members of struct foo are protected by a
static lock, while others are protected by the lock on foo.
Another member of foo is protected by some convention
regarding its use.
mutex_t lock1;
struct foo {
    mutex_t lock;
    int mbr1, mbr2;
    struct {
        int mbr1, mbr2;
        char* mbr3;
    } inner;
    int mbr4;
};
NOTE(MUTEX_PROTECTS_DATA(lock1, foo::{mbr1 inner.mbr1}))
NOTE(MUTEX_PROTECTS_DATA(foo::lock, foo::{mbr2 inner.mbr2}))
NOTE(SCHEME_PROTECTS_DATA("convention XYZ", inner.mbr3))
A datum can only be protected in one way. If multiple annotations
about protection (not only these three but also READ_ONLY_DATA)
are used for a single datum, later annotations silently override
earlier annotations. This allows for easy description of a structure
in which all but one or two members are protected in the same way.
For example, most of the members of struct BAR
below are protected by the lock on struct foo,
but one is protected by a global lock.
mutex_t lock1;
typedef struct {
    int mbr1, mbr2, mbr3, mbr4;
} BAR;
NOTE(MUTEX_PROTECTS_DATA(foo::lock, BAR))
NOTE(MUTEX_PROTECTS_DATA(lock1, BAR::mbr3))
NOTE(READ_ONLY_DATA(DataNameList))
This annotation is allowed both outside and inside a function
definition. It tells LockLint how data should be protected. In this
case, it tells LockLint that the data should only be read, and not
written.
Note - No error is signaled if read-only data
is written while it is considered invisible. Data is considered
invisible when other threads cannot access it; for example, if
other threads do not know about it.
This annotation is often used with data that is initialized and
never changed thereafter. If the initialization is done at runtime
before the data is visible to other threads, use annotations to let
LockLint know that the data is invisible during that time.
LockLint knows that const data is read-only.
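A sketch of the common pattern (hypothetical names): a table is filled in before any other threads exist, declared read-only, and never written again.

#include <note.h>

int config_table[16];
NOTE(READ_ONLY_DATA(config_table))

int main(void) {
    int i;
    /* initialized before any competing threads exist, so no lock is needed */
    for (i = 0; i < 16; i++)
        config_table[i] = i * i;
    NOTE(COMPETING_THREADS_NOW)
    /* ... create worker threads that only read config_table ... */
    return 0;
}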
NOTE(DATA_READABLE_WITHOUT_LOCK(DataNameList))
This annotation is allowed both outside and inside a function
definition. It informs LockLint that the specified data may be read
without holding the protecting locks. This is useful with an
atomically readable datum that stands alone (as opposed to a set of
data whose values are used together), since it is valid to peek at
the unprotected data if you do not intend to modify it.
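For instance (a sketch with hypothetical names), a counter that is always written under its lock but may be peeked at without it:

#include <stdio.h>
#include <note.h>
#include <synch.h>

mutex_t stats_lock;
int     requests_served;
NOTE(MUTEX_PROTECTS_DATA(stats_lock, requests_served))
NOTE(DATA_READABLE_WITHOUT_LOCK(requests_served))

void count_request(void) {
    mutex_lock(&stats_lock);         /* writers still take the lock */
    requests_served++;
    mutex_unlock(&stats_lock);
}

void report(void) {
    /* lock-free peek at a single, atomically readable value */
    printf("served so far: %d\n", requests_served);
}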
NOTE(RWLOCK_COVERS_LOCKS(RwlockName, LockNameList))
This annotation is allowed both outside and inside a function
definition. It tells LockLint that a hierarchical relationship exists
between a readers-writer lock and a set of other locks. Under these
rules, holding the cover lock for write access affords a thread
access to all data protected by the covered locks. Also, a thread
must hold the cover lock for read access whenever holding any of the
covered locks.
Using a readers-writer lock to cover another lock in this way is
simply a convention; there is no special lock type. However, if
LockLint is not told about this coverage relationship, it assumes
that the locks are being used according to the usual conventions and
generates error messages.
The following example specifies that member lock of
unnamed foo structures covers member lock of
unnamed bar and zot structures:
NOTE(RWLOCK_COVERS_LOCKS(foo::lock, {bar zot}::lock))
NOTE(MUTEX_ACQUIRED_AS_SIDE_EFFECT(MutexExpr))
NOTE(READ_LOCK_ACQUIRED_AS_SIDE_EFFECT(RwlockExpr))
NOTE(WRITE_LOCK_ACQUIRED_AS_SIDE_EFFECT(RwlockExpr))
NOTE(LOCK_RELEASED_AS_SIDE_EFFECT(LockExpr))
NOTE(LOCK_UPGRADED_AS_SIDE_EFFECT(RwlockExpr))
NOTE(LOCK_DOWNGRADED_AS_SIDE_EFFECT(RwlockExpr))
NOTE(NO_COMPETING_THREADS_AS_SIDE_EFFECT)
NOTE(COMPETING_THREADS_AS_SIDE_EFFECT)
These annotations are allowed only inside a function definition.
Each tells LockLint that the function has the specified side effect
on the specified lock--that is, that the function deliberately leaves
the lock in a different state on exit than it was in when the
function was entered. In the case of the last two of these
annotations, the side effect is not about a lock but rather about the
state of concurrency.
When stating that a readers-writer lock is acquired as a side
effect, you must specify whether the lock was acquired for read or
write access.
A lock is said to be upgraded if it changes from being
acquired for read-only access to being acquired for read/write
access. Downgraded means a transformation in the opposite
direction.
LockLint analyzes each function for its side effects on locks (and
concurrency). Ordinarily, LockLint expects that a function will have
no such effects; if the code has such effects intentionally, you must
inform LockLint of that intent using annotations. If it finds that a
function has different side effects from those expressed in the
annotations, an error message results.
The annotations described in this section refer generally to the
function's characteristics and not to a particular point in the code.
Thus, these annotations are probably best written at the top of the
function. There is, for example, no difference (other than
readability) between this:
foo() {
    NOTE(MUTEX_ACQUIRED_AS_SIDE_EFFECT(lock_foo))
    ...
    if (x && y) {
        ...
    }
}
and this:
foo() {
    ...
    if (x && y) {
        NOTE(MUTEX_ACQUIRED_AS_SIDE_EFFECT(lock_foo))
        ...
    }
}
If a function has such a side effect, the effect should be the
same on every path through the function. LockLint complains about and
refuses to analyze paths through the function that have side effects
other than those specified.
NOTE(COMPETING_THREADS_NOW)
NOTE(NO_COMPETING_THREADS_NOW)
These two annotations are allowed only inside a function
definition. The first annotation tells LockLint that after this point
in the code, other threads exist that might try to access the same
data that this thread will access. The second annotation specifies that
this is no longer the case; either no other threads are running or
whatever threads are running will not be accessing data that this
thread will access. While there are no competing threads, LockLint
does not complain if the code accesses data without holding the locks
that ordinarily protect that data.
These annotations are useful in functions that initialize data
without holding locks before starting up any additional threads. Such
functions may access data without holding locks, after waiting for
all other threads to exit. So one might see something like this:
main() {
    <initialize data structures>
    NOTE(COMPETING_THREADS_NOW)
    <create several threads>
    <wait for all of those threads to exit>
    NOTE(NO_COMPETING_THREADS_NOW)
    <look at data structures and print results>
}
Note - If a NOTE is present in main(),
LockLint assumes that when main() starts, no other threads
are running. If main() does not include a NOTE,
LockLint does not assume that no other threads are running.
LockLint does not issue a warning if, during analysis, it
encounters a COMPETING_THREADS_NOW annotation when it
already thinks competing threads are present. The condition simply
nests. No warning is issued because the annotation may mean different
things in each use (that is the notion of which threads compete may
differ from one piece of code to the next). On the other hand, a
NO_COMPETING_THREADS_NOW annotation that does not match a
prior COMPETING_THREADS_NOW (explicit or implicit) causes
a warning.
NOTE(NOT_REACHED)
This annotation is allowed only inside a function definition. It
tells LockLint that a particular point in the code cannot be reached,
and therefore LockLint should ignore the condition of locks held at
that point. This annotation need not be used after every call to
exit(), for example, as the lint annotation /*
NOTREACHED */ is used. Simply use it in
definitions for exit() and the like (primarily in LockLint
libraries), and LockLint will determine that code following calls to
such functions is not reached. This annotation should seldom appear
outside LockLint libraries. An example of its use (in a LockLint
library) would be:
exit(int code) { NOTE(NOT_REACHED) }
NOTE(LOCK_ORDER(LockNameList))
This annotation, which is allowed either outside or inside a
function definition, specifies the order in which locks should be
acquired. It is similar to the assert order and order
subcommands. See the command summary at the end of this article.
To avoid deadlocks, LockLint assumes that whenever multiple locks
must be held at once they are always acquired in a well-known order.
If LockLint has been informed of such ordering using this annotation,
an informative message is produced whenever the order is violated.
This annotation may be used multiple times, and the semantics will
be combined appropriately. For example, given the annotations
NOTE(LOCK_ORDER(a b c))
NOTE(LOCK_ORDER(b d))
LockLint will deduce the ordering:
NOTE(LOCK_ORDER(a d))
It is not possible to deduce anything about the order of c
with respect to d in this example.
If a cycle exists in the ordering, an appropriate error message
will be generated.
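As a sketch (hypothetical locks a and b), code that respects an asserted order always acquires the locks left to right and releases them in the reverse order:

#include <note.h>
#include <synch.h>

mutex_t a, b;
NOTE(LOCK_ORDER(a b))

void update_both(void) {
    mutex_lock(&a);                  /* always take a before b, as asserted above */
    mutex_lock(&b);
    /* ... */
    mutex_unlock(&b);
    mutex_unlock(&a);
}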
NOTE(NOW_INVISIBLE_TO_OTHER_THREADS(DataExpr, ...))
NOTE(NOW_VISIBLE_TO_OTHER_THREADS(DataExpr, ...))
These annotations, which are allowed only within a function
definition, tell LockLint whether or not the variables represented by
the specified expressions are visible to other threads; that
is, whether or not other threads could access the variables.
Another common use of these annotations is to inform LockLint that
variables it would ordinarily assume are visible are in fact not
visible, because no other thread has a pointer to them. This
frequently occurs when allocating data off the heap--you can safely
initialize the structure without holding a lock, since no other
thread can yet see the structure.
Foo* p = (Foo*) malloc(sizeof(*p));
NOTE(NOW_INVISIBLE_TO_OTHER_THREADS(*p))
p->a = bar;
p->b = zot;
NOTE(NOW_VISIBLE_TO_OTHER_THREADS(*p))
add_entry(&global_foo_list, p);
Calling a function never has the side effect of making variables
visible or invisible. Upon return from the function, all changes in
visibility caused by the function are reversed.
NOTE(ASSUMING_PROTECTED(DataExpr, ...))
This annotation, which is allowed only within a function
definition, tells LockLint that this function assumes that the
variables represented by the specified expressions are protected in
one of the following ways:
The appropriate lock is held for each variable
The variables are invisible to other threads
There are no competing threads when the call is made
LockLint issues an error if none of these conditions is true.
f(Foo* p, Bar* q) {
    NOTE(ASSUMING_PROTECTED(*p, *q))
    p->a++;
    ...
}
LockLint recognizes some assertions as relevant to the state of
threads and locks. (For more information, see the assert man
page.)
Assertions may be made only within a function definition, where a
statement is allowed.
Note - ASSERT() is used in kernel
and driver code, whereas assert() is used in user
(application) code. For simplicity's sake, this document uses
assert() to refer to either one, unless explicitly stated
otherwise.
assert(NO_LOCKS_HELD);
LockLint recognizes this assertion to mean that, when this point
in the code is reached, no locks should be held by the thread
executing this test. Violations are reported during analysis. A
routine that blocks might want to use such an assertion to ensure
that no locks are held when a thread blocks or exits.
The assertion also clearly serves as a reminder to someone
modifying the code that any locks acquired must be released at that
point.
It is really only necessary to use this assertion in leaf-level
functions that block. If a function blocks only inasmuch as it calls
another function that blocks, the caller need not contain this
assertion as long as the callee does. Therefore this assertion
probably sees its heaviest use in versions of libraries (for example,
libc) written specifically for LockLint (like lint
libraries).
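A sketch of such a leaf-level function (hypothetical), asserting that the calling thread holds nothing before it blocks:

#include <assert.h>
#include <synch.h>
#include <unistd.h>

void wait_for_work(void) {
    assert(NO_LOCKS_HELD);           /* holding any lock across the blocking call would be an error */
    sleep(1);                        /* stands in for any call that blocks */
}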
The file synch.h defines NO_LOCKS_HELD as 1
if it has not already been otherwise defined, causing the assertion
to succeed; that is, the assertion is effectively ignored at runtime.
You can override this default runtime meaning by defining
NO_LOCKS_HELD before you include either note.h
or synch.h (which may be included in either order). For
example, if a body of code uses only two locks called a
and b, the following definition would probably suffice:
#define NO_LOCKS_HELD (!MUTEX_HELD(&a) && !MUTEX_HELD(&b))
#include <note.h>
#include <synch.h>
Doing so does not affect LockLint's testing of the assertion; that
is, LockLint still complains if any locks are held (not just
a or b).
assert(NO_COMPETING_THREADS);
LockLint recognizes this assertion to mean that, when this point
in the code is reached, no other threads should be competing with the
one running this code. Violations (based on information provided by
certain NOTE-style assertions) are reported during
analysis. Any function that accesses variables without holding their
protecting locks (operating under the assumption that no other
relevant threads are out there touching the same data), should be so
marked.
By default, this assertion is ignored at runtime--that is, it
always succeeds. No generic runtime meaning for NO_COMPETING_THREADS
is possible, since the notion of which threads compete involves
knowledge of the application. For example, a driver might make such
an assertion to say that no other threads are running in this driver
for the same device. Because no generic meaning is possible, synch.h
defines NO_COMPETING_THREADS as 1 if it has not already
been otherwise defined.
However, you can override the default meaning for
NO_COMPETING_THREADS by defining it before including
either note.h or synch.h (which may be included
in either order). For example, if the program keeps a count of the
number of running threads in a variable called num_threads,
the following definition might suffice:
#define NO_COMPETING_THREADS (num_threads == 1)
#include <note.h>
#include <synch.h>
Doing so does not affect LockLint's testing of the assertion.
assert(MUTEX_HELD(lock_expr) && ...);
This assertion is widely used within the kernel. It performs
runtime checking if assertions are enabled. The same capability
exists in user code.
This code does roughly the same thing during LockLint analysis as
it does when the code is actually run with assertions enabled; that
is, it reports an error if the executing thread does not hold the
lock as described.
Note - The thread library performs a weaker
test, only checking that some thread holds the lock. LockLint
performs the stronger test.
LockLint recognizes the use of MUTEX_HELD(),
RW_READ_HELD(), RW_WRITE_HELD(), and
RW_LOCK_HELD() macros, and negations thereof. Such macro
calls may be combined using the && operators. For
example, the following assertion causes LockLint to check that a
mutex is not held and that a readers-writer lock is write-held:
assert(p && !MUTEX_HELD(&p->mtx) && RW_WRITE_HELD(&p->rwlock));
LockLint also recognizes expressions like:
MUTEX_HELD(&foo) == 0
TABLE A-1 contains a summary of LockLint
subcommands.
TABLE A-1 LockLint Subcommands

Subcommand      Effect
analyze         Tests the loaded files for lock inconsistencies; also validates against assertions
assert          Specifies what LockLint should expect to see regarding accesses and modifications to locks and variables
declare         Passes information to LockLint that it cannot deduce
disallow        Excludes the specified calling sequence in the analysis
disallows       Lists the calling sequences that are excluded from the analysis
files           Lists the source code files loaded via the load subcommand
funcptrs        Lists information about function pointers
funcs           Lists information about specific functions
help            Provides information about the specified keyword
ignore          Excludes the specified functions and variables from analysis
load            Specifies the .ll files to be loaded
locks           Lists information about locks
members         Lists members of the specified struct
order           Shows information about the order in which locks are acquired
pointer calls   Lists calls made through function pointers
reallow         Allows exceptions to the disallow subcommand
reallows        Lists the calling sequences reallowed through the reallow subcommand
refresh         Restores and then saves the latest saved state again
restore         Restores the latest saved state
save            Saves the current state on a stack
saves           Lists the states saved on the stack through the save subcommand
start           Starts a LockLint session
sym             Lists the fully qualified names of functions and variables associated with the specified name
unassert        Removes some assertions specified through the assert subcommand
vars            Lists information about variables
Many LockLint subcommands require you to specify names of locks,
variables, pointers, and functions. In C, it is possible for names to
be ambiguous. See LockLint
Naming Conventions for details on specifying names to LockLint
subcommands.
TABLE A-2 lists the exit
status values of LockLint subcommands.
TABLE A-2 Exit Status Values of LockLint Subcommands

Value   Meaning
0       Normal
1       System error
2       User error, such as incorrect options or undefined name
3       Multiple errors
5       LockLint detected error: violation of an assertion, potential data race or deadlock may have been found, unprotected data references, and so on
10      Licensing error
Many LockLint subcommands require you to specify names of locks,
variables, pointers, and functions. In C, it is possible for names to
be ambiguous; for example, there may be several variables named foo,
one of them extern and others static.
The C language does not provide a way of referring to ambiguously
named variables that are hidden by the scoping rules. In LockLint,
however, a way of referring to such variables is needed. Therefore,
every symbol in the code being analyzed is given a formal name, a
name that LockLint uses when referring to the symbol. Table A-3 lists
some examples of formal names for a function.
TABLE A-3 Sample Formal Function Names

Formal Name   Definition
:func         extern function
file:func     static function
Table A-4 lists the formal names for a variable, depending on its
use as a lock, a pointer, or an actual variable.
TABLE A-4 Sample Formal Variable Names

Formal Name      Definition
:var             extern variable
file:var         static variable with file scope
:func/var        Variable defined in an extern function
file:func/var    Variable defined in a static function
tag::mbr         Member of an unnamed struct
file@line::mbr   Member of an unnamed, untagged struct
In addition, any of these may be followed by an arbitrary number
of .mbr specifications to denote members of a structure.
Table A-5 contains some examples of the LockLint naming scheme.

TABLE A-5 LockLint Naming Scheme Examples

Example              Meaning
:bar                 External variable or function bar
:main/bar            static variable bar that is defined within extern function main
zot.c:foo/bar.zot    Member zot of static variable bar, which is defined within static function foo in file zot.c
foo::bar.zot.bim     Member bim of member zot of member bar of a struct with tag foo, where no name is associated with that instance of the struct (it was accessed through a pointer)
While LockLint refers to symbols in this way, you are not
required to. You may use as little of the name as is required to
unambiguously identify it. For example, you could refer to
zot.c:foo/bar as foo/bar as long as there is
only one function foo defining a variable bar.
You can even refer to it simply as bar as long as there is
no other variable by that name.
zot.c:foo/bar
foo/bar
C allows the programmer to declare a structure without assigning
it a tag. When you use a pointer to such a structure, LockLint must
make up a tag by which to refer to the structure. It generates a tag
of the format filename@line_number. For example, if you
declare a structure without a tag at line 42 of file foo.c,
and then refer to member bar of an instance of that
structure using a pointer, as in:
foo.c
typedef struct { ... } foo;
foo *p;
func1() { p->bar = 0; }
LockLint sees that as a reference to foo.c@42::bar.
foo.c@42::bar
Because members of a union share the same memory
location, LockLint treats all members of a union as the
same variable. This is accomplished by using a member name of %
regardless of which member is accessed. Since bit fields typically
involve sharing of memory between variables, they are handled
similarly: % is used in place of the bit field member
name.
When you list locks and variables, you are only seeing those locks
and variables that are actually used within the code represented by
the .ll files. No information is available from LockLint
on locks, variables, pointers, and functions that are declared but
not used. Likewise, no information is available for accesses through
pointers to simple types, such as this one:
int *ip = &i;
*ip = 0;
When simple names (for example, foo) are used, there is
the possibility of conflict with keywords in the subcommand language.
Such conflicts can be resolved by surrounding the word with double
quotes, but remember that you are typing commands to a shell, and
shells typically consume the outermost layer of quotes. Therefore you
have to escape the quotes, as in this example:
% lock_lint ignore foo in func \"func\"
lock_lint ignore foo in func \"func\"
If two files with the same base name are included in an analysis,
and these two files contain static variables by the same
name, confusion can result. LockLint thinks the two variables are the
same.
If you duplicate the definition for a struct with no
tag, LockLint does not recognize the definitions as the same struct.
The problem is that LockLint makes up a tag based on the file and
line number where the struct is defined (such as x.c@24),
and that tag differs for the two copies of the definition.
x.c@24
If a function contains multiple automatic variables of the same
name, LockLint cannot tell them apart. Because LockLint ignores
automatic variables except when they are used as function pointers,
this does not come up often. In the following code, for example,
LockLint uses the name :foo/fp for both function pointers:
:foo/fp
int foo(void (*fp)()) {
    (*fp)();
    {
        void (*fp)() = get_func();
        (*fp)();
        ...
Some source code annotations are equivalent to subcommands such as assert.
Source code annotations are often preferable to subcommands, because
they
Have finer granularity
Are easy to maintain
Serve as comments on the code in question
analyze [-hv]
Analyzes the loaded files for lock inconsistencies that may lead
to data races and deadlocks. This subcommand may produce a great deal
of output, so you may want to redirect the output to a file. This
subcommand can be run only once for each saved state.
-h (history) produces detailed information for each
phase of the analysis. No additional errors are issued.
-h
-v (verbose) generates additional messages during
analysis:
-v
Writable variable read while no locks held!
Writable variable read while no locks held!
Variable written while no locks held!
Variable written while no locks held!
No lock consistently held while accessing variable!
No lock consistently held while accessing variable!
Output from the analyze subcommand can be particularly
abundant if:
The code has not been analyzed before
The assert read only subcommand was not used to
identify read-only variables
assert read only
No assertions were made about the protection of writable
variables
The output messages are likely to reflect situations that are not
real problems; therefore, it is often helpful to first analyze the
code without the -v option, to show only the messages that
are likely to represent real problems.
Each problem encountered during analysis is reported on one or
more lines, the first of which begins with an asterisk. Where
possible, LockLint provides a complete traceback of the calls taken
to arrive at the point of the problem. The analysis goes through the
following phases:
Checking for functions with variable side effects on locks
If a disallow sequence specifies that a function
with locking side effects should not be analyzed, LockLint produces
incorrect results. If such disallow sequences are found,
they are reported and analysis does not proceed.
Preparing locks to hold order info LockLint
processes the asserted lock order information available to it. If
LockLint detects a cycle in the asserted lock order, the cycle is
reported as an error.
Checking for function pointers with no targets LockLint
cannot always deduce assignments to function pointers. During this
phase, LockLint reports any function pointer for which it does not
think there is at least one target, whether deduced from the source
or declared a func.ptr target.
func.ptr
Removing accesses to ignored variables To improve
performance, LockLint removes references to ignored variables at
this point. (This affects the output of the vars
subcommands.)
Preparing functions for analysis During this phase,
LockLint determines what side effects each function has on locks.
(A side effect is a change in a lock's state that is not
reversed before returning.) An error results if:
The side effects do not match what LockLint expects
The side effects are different depending upon the path
taken through the function
A function with such side effects is recursive
LockLint expects that a function will have no side effects on
locks, except where side effects have been added using the assert
side effect subcommand.
assert
side effect
Preparing to recognize calling sequences to allow/disallow Here, LockLint
processes the various allow/disallow subcommands that were issued, if any.
No errors or warnings are reported.
allow/disallow
Checking locking side effects in function pointer targets
Calls through function pointers may target several
functions. All functions that are targets of a particular function
pointer must have the same side effects on locks (if any). If a
function pointer has targets that differ in their side effects,
analysis does not proceed.
Checking for consistent use of locks with condition
variables
Here LockLint checks that all waits on a particular condition
variable use the same mutex. Also, if you assert that particular
lock to protect that condition variable, LockLint makes sure you
use that lock when waiting on the condition variable.
Determining locks consistently held when each function is
entered
During this phase, LockLint reports violations of assertions
that locks should be held upon entry to a function (see assert
subcommand). Errors such as locking a mutex lock that is already
held, or releasing a lock that is not held, are also reported.
Locking an anonymous lock, such as foo::lock, more than
once is not considered an error, unless the declare one
command has been used to indicate otherwise.
declare one
Once the analysis is done, you can find still more potential
problems in the output of the vars and order
subcommands.
assert has the following syntax:
assert side effect mutex lock acquired in func ...
assert side effect rwlock [read] lock acquired in func ...
assert side effect lock released in func ...
assert side effect rwlock lock upgraded in func ...
assert side effect rwlock lock downgraded in func ...
assert mutex|rwlock lock protects var ...
assert mutex lock protects func ...
assert rwlock lock protects [reads in] func ...
assert order lock lock ...
assert read only var ...
assert rwlock lock covers lock ...
These subcommands tell LockLint how the programmer expects locks
and variables to be accessed and modified in the application being
checked. During analysis any violations of such assertions are
reported.
Note - If a variable is asserted more than
once, only the last assert takes effect.
A side effect is a change made by a function in the
state of a lock, a change that is not reversed before the function
returns. If a function contains locking side effects and no
assertion is made about the side effects, or the side effects
differ from those that are asserted, a warning is issued during the
analysis. The analysis then continues as if the unexpected side
effect never occurred.
Note - There is another kind of side effect
called an inversion. See the locks or
funcs subcommands for more details.
locks
Warnings are also issued if the side effects produced by a
function could differ from call to call (for example, conditional
side effects). The keywords acquired in, released
in, upgraded in, and downgraded in
describe the type of locking side effect being asserted about the
function. The keywords correspond to the side effects available via
the threads library interfaces and the DDI and DKI Kernel Functions
(see mutex(3T), rwlock(3T), mutex(9F) and
rwlock(9F)).
acquired in
released
in
upgraded in
downgraded in
The side effect assertion for rwlocks takes an
optional argument read; if read is present,
the side effect is that the function acquires read-level access for
that lock. If read is not present, the side effect
specifies that the function acquires write-level access for that
lock.
rwlocks
read
assert
|
protects
Asserting that a mutex lock protects a variable causes an error
whenever the variable is accessed without holding the mutex lock.
Asserting that a readers-writer lock protects a variable causes an
error whenever the variable is read without holding the lock for
read access or written without holding the lock for write access.
Subsequent assertions as to which lock protects a variable override
any previous assertions; that is, only the last lock asserted to
protect a variable is used during analysis.
protects
Asserting that a mutex lock protects a function causes an error
whenever the function is called without holding the lock. For root
functions, the analysis is performed as if the root function were
called with this assertion being true.
Asserting that a readers-writer lock protects a function causes
an error whenever the function is called without holding the lock
for write access. Asserting that a readers-writer lock protects
reads in a function causes an error whenever the function is called
without holding the lock for read access. For root functions, the
analysis is performed as if the root function were called with this
assertion being true.
Note - To avoid flooding the output with
too many violations of a single assert... protects
subcommand, a maximum of 20 violations of any given assertion is
shown. This limit does not apply to the assert order
subcommand.
assert... protects
Informs LockLint of the order in which locks should be acquired.
That is, LockLint assumes that the program avoids deadlocks by
adhering to a well-known lock order. Using this subcommand, you can
make LockLint aware of the intended order so that violations of the
order can be printed during analysis.
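For example, assuming two hypothetical locks named list_lock and node_lock that the code always intends to acquire in that order (the lock names are illustrative, not from the original document), the assertion might look like this:
% lock_lint assert order list_lock node_lock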
States that the given set of variables should never be written
by the application; LockLint reports any writes to the variables.
Unless a variable is read-only, reading the variable while no locks
are held will elicit an error since LockLint assumes that the
variable could be written by another thread at the same time.
covers
Informs LockLint of the existence of a hierarchical locking
relationship. A readers-writer lock may be used in conjunction with
other locks (mutex or readers-writer) in the following way to
increase performance in certain situations:
The covering lock, known as the cover, must be held while any of a set of
other covered locks is held. That is, it is illegal (under these
conventions) to hold a covered lock while not also holding the cover, with
at least read access.
While holding the cover for write access, you can access
any variable protected by one of the covered locks without holding
the covered lock. This works because it is impossible for another
thread to hold the covered lock (since it would also have to be
holding the cover). The time saved by not locking the covered
locks can increase performance if there is not excessive
contention over the cover.
Using assert rwlock covers prevents
LockLint from issuing error messages when a thread accesses
variables while holding the cover for write access but not the
covered lock. It also enables checks to ensure that a covered lock
is never held when its cover is not.
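As a sketch of this assertion, assuming a hypothetical readers-writer lock table_lock acting as the cover for two row locks row_a_lock and row_b_lock (the names are illustrative, and the exact keyword form follows the synopsis above):
% lock_lint assert rwlock table_lock covers row_a_lock row_b_lock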
covers
declare has the following syntax:
declare mutex mutex ...
declare rwlocks rwlock ...
declare func_ptr targets func ...
declare nonreturning func ...
declare one tag ...
declare readable var ...
declare root func ...
These subcommands tell LockLint things that it cannot deduce
from the source presented to it.
declare mutex
declare rwlocks
These subcommands (along with declare root, below)
are typically used when analyzing libraries without a supporting
harness. The subcommands declare mutex and declare
rwlocks create mutex and reader-writer locks of the given
names. These symbols can be used in subsequent assert
subcommands.
declare mutex
declare
rwlocks
Adds the specified functions to the list of functions that could
be called through the specified function pointer.
LockLint manages to gather a good deal of information about
function pointer targets on its own by watching initialization and
assignments. For example, for the code
struct foo { int (*fp)(); } foo1 = { bar };
LockLint does the equivalent of the commands
% lock_lint declare foo::fp targets bar
% lock_lint declare foo1.fp targets bar
declare nonreturning
Tells LockLint that the specified functions do not return.
LockLint will not give errors about lock state after calls to such
functions.
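For instance, if the program has a hypothetical fatal-error routine die() that prints a message and calls exit() (the function name is illustrative), it could be declared as follows:
% lock_lint declare nonreturning die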
Tells LockLint that only one unnamed instance exists of each
structure whose tag is specified. This knowledge makes it possible
for LockLint to give an error if a lock in that structure is
acquired multiple times without being released. Without this
knowledge, LockLint does not complain about multiple acquisitions
of anonymous locks (for example, foo::lock), since two
different instances of the structure could be involved.
declare readable
Tells LockLint that the specified variables may be safely read
without holding any lock, thus suppressing the errors that would
ordinarily occur for such unprotected reads.
Tells LockLint to analyze the given functions as a root
function; by default, if a function is called from any other
function, LockLint does not attempt to analyze that function as the
root of a calling sequence.
A root function is a starting point for the analysis;
functions that are not called from within the loaded files are
naturally roots. This includes, for example, functions that are
never called directly but are the initial starting point of a
thread (for example, the target function of a thread_create
call). However, a function that is called from within the
loaded files might also be called from outside the loaded files, in
which case you should use this subcommand to tell LockLint to use
the function as a starting point in the analysis.
thread_create
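A minimal sketch, assuming a hypothetical function worker_start that is both the start routine passed to thread_create and callable from code outside the loaded files:
% lock_lint declare root worker_start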
disallow has the following syntax:
disallow func ...
disallow func
Tells LockLint that the specified calling sequence should not be
analyzed. For example, to prevent LockLint from analyzing any
calling sequence in which f() calls g() calls
h(), use the subcommand
f()
g()
h()
% lock_lint disallow f g h
Function pointers can make a program appear to follow many
calling sequences that do not in practice occur. Bogus locking
problems, particularly deadlocks, can appear in such sequences.
disallow prevents LockLint from following such sequences.
disallows has the following syntax:
Lists the calling sequences that are disallowed by the disallow
subcommand.
There is no exit subcommand for LockLint. To exit LockLint, use
the exit command for the shell you are using.
files has the following syntax:
Lists the .ll versions of the source code files
loaded with the load subcommand.
funcptrs has the following syntax:
funcptrs [-botu] func_ptr ...
funcptrs [-blotuz]
Lists information about the function pointers used in the loaded
files. One line is produced for each function pointer.
TABLE A-6 funcptrs
Options
funcptrs
Option
-b
-b
(bound) This option lists only function pointers to which
function targets have been bound; that is, it suppresses the display
of function pointers for which there are no bound targets.
-l
-l
(long) Equivalent to -ot.
-ot
-o
-o
(other) This presents the following information about
each function pointer:
Calls=#
Calls=#
Indicates the number of places in the loaded files this
function pointer is used to call a function.
=nonreturning
=nonreturning
Indicates that a call through this function pointer never
returns (none of the functions targeted ever return).
-t
-t
(targets) This option lists the functions currently
bound as targets to each function pointer listed, as follows:
targets={ func ... }
func ...
-u
-u
(unbound) This lists only those function pointers to
which no function targets are bound. That is, suppresses the
display of function pointers for which there are bound targets.
-z
(zero) This lists function pointers for which there
are no calls. Without this option information is given only on
function pointers through which calls are made.
You can combine various options to funcptrs:
This example lists information about the specified function
pointers. By default, this variant of the subcommand gives all the
details about the function pointers, as if -ot had been
specified.
funcptrs [-botu] func_ptr ...
func_ptr
This example lists information about all function pointers
through which calls are made. If -z is used, even
function pointers through which no calls are made are listed.
-z
funcptrs [-blotuz]
funcs has the following syntax:
funcs [-adehou] func ...
funcs [-adehilou]
funcs [-adehlou] [directly] called by func ...
funcs [-adehlou] [directly] calling func ...
funcs [-adehlou] [directly] reading var ...
funcs [-adehlou] [directly] writing var ...
funcs [-adehlou] [directly] accessing var ...
funcs [-adehlou] [directly] affecting lock ...
funcs [-adehlou] [directly] inverting lock ...
funcs lists information about the functions defined
and called in the loaded files. Exactly one line is printed for
each function.
TABLE A-7 funcs
Options
funcs
-a
(asserts) This option shows information about which
locks are supposed to be held on entry to each function, as set
by the assert subcommand. When such assertions have
been made, they show as:
asserts={ lock ... }
read_asserts={ lock ... }
An asterisk appears before the name of any lock that was not
consistently held upon entry (after analysis).
-e
-e
(effects) This option shows information about the
side effects each function has on locks (for example, "acquires
mutex lock foo"). If a function has such side effects,
they are shown as:
side_effects={ effect [, effect] ... }
Using this option prior to analysis shows side effects
asserted by an assert side effect
subcommand. After analysis, information on side effects
discovered during the analysis is also shown.
-d
-d
(defined) This option shows only those functions that
are defined in the loaded files. That is, it suppresses the
display of undefined functions.
-h
(held) This option shows information about which
locks were consistently held when the function was called
(after analysis). Locks consistently held for read (or write)
on entry show as:
held={ lock ... }+{ lock ... }
read_held={ lock ... }+{ lock ... }
The first list in each set is the list of locks consistently
held when the function was called; the second is a list of
inconsistently held locks--locks that were sometimes
held when the function was called, but not every time.
-i
-i
(ignored) This option lists ignored functions.
(long) Equivalent to -aeoh.
-aeoh
(other) This option causes LockLint to present, where
applicable, the following information about each function
=ignored
=ignored
Indicates that LockLint has been told to ignore the function
using the ignore subcommand.
Indicates that a call through this function never returns
(none of the functions targeted ever return).
=rooted
=rooted
Indicates that the function was made a root using the
declare root subcommand.
=root
=root
Indicates that the function is naturally a root (is not
called by any function).
=recursive
=recursive
Indicates that the function makes a call to itself.
=unanalyzed
=unanalyzed
Indicates that the function was never called during analysis
(and is therefore unanalyzed). This differs from =root
in that this can happen when foo calls bar
and bar calls foo, and no other function
calls either foo or bar, and neither have
been rooted (see =rooted). So, because foo
and bar are not roots, and they can never be reached from any
root function, they have not been analyzed.
calls=#
calls=#
Indicates the number of places in the source code, as
represented by the loaded files, where this function is called.
These calls may not actually be analyzed; for example, a
disallow subcommand may prevent a call from ever
really taking place.
(undefined) This option shows only those functions
that are undefined in the loaded files.
funcs [-adehou]
Lists information about individual functions. By default, this
variant of the subcommand gives all the details about the
functions, as if -aeho had been specified.
-aeho
Lists information about all functions that are not ignored. If
-i is used, even ignored functions are listed.
funcs [-adehlou] [directly] called by
Lists only those functions that may be called as a result of
calling the specified functions. If directly is used,
only those functions called by the specified functions are listed.
If directly is not used, any functions those
functions called are also listed, and so on.
directly
funcs [-adehlou] [directly] calling
Lists only those functions that, when called, may result in one
or more of the specified functions being called. See notes below on
directly.
funcs [-adehlou] [directly] reading
Lists only those functions that, when called, may result in one
or more of the specified variables being read. See notes below on
directly.
funcs [-adehlou] [directly] writing
Lists only those functions that, when called, may result in one
or more of the specified variables being written. See notes below
on directly.
funcs [-adehlou] [directly] accessing
Lists only those functions that, when called, may result in one
or more of the specified variables being accessed (read or
written). See notes below on directly.
funcs [-adehlou] [directly] affecting
Lists only those functions that, when called, may result in one
or more of the specified locks being affected (acquired, released,
upgraded, or downgraded). See notes below on directly.
funcs [-adehlou] [directly] inverting
Lists only those functions that invert one or more of the
specified locks. If directly is used, only those
functions that themselves invert one or more of the locks (actually
release them) are listed. If directly is not used, any
function that is called with a lock already held, and then calls
another function that inverts the lock, is also listed, and so on.
For example, in the following code, f3() directly
inverts lock m, and f2() indirectly inverts
it:
f3()
m
f2()
f1() { pthread_mutex_unlock(&m); f2(); pthread_mutex_lock(&m); }
f2() { f3(); }
f3() { pthread_mutex_unlock(&m); pthread_mutex_lock(&m); }
Except where stated otherwise, variants that allow the keyword
directly only list the functions that themselves
fit the description. If directly is not used, all the
functions that call those functions are listed, and any functions
that call those functions, and so on.
help [keyword]
Without a keyword, help displays the subcommand set.
With a keyword, help gives helpful information
relating to the specified keyword. The keyword may be the first
word of any LockLint subcommand. There are also a few other
keywords for which help is available:
condvars locking example makefile ifdef names inversions overview limitations shell
condvars
locking
example
makefile
ifdef
names
inversions
overview
limitations
shell
If environment variable PAGER is set, that program is
used as the pager for help. If PAGER is not
set, more is used.
PAGER
more
ignore func|var ... [ in func ... ]
func|var ... [
Tells LockLint to exclude certain functions and variables from
the analysis. This exclusion may be limited to specific functions
using the in func ... clause; otherwise the
exclusion applies to all functions.
in
The commands
% lock_lint funcs -io | grep =ignored
% lock_lint vars -io | grep =ignored
show which functions and variables are ignored.
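As an illustration (the variable and function names below are hypothetical), a debugging counter could be excluded from analysis only within one function like this:
% lock_lint ignore debug_count in init_tables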
load file ...
Loads the specified .ll files. The extension may be
omitted, but if an extension is specified, it must be .ll.
Absolute and relative paths are allowed. You are talking to a
shell, so the following are perfectly legal (depending upon your
shell's capabilities):
% lock_lint load *.ll
% lock_lint load ../foo/abcdef{1,2}
% lock_lint load `find . -name \*.ll -print`
The help text for load has been changed extensively. To see the new text,
type:
% lock_lint help load
lock_lint help load
locks [-co] lock ...
locks [-col]
locks [-col] [directly] affected by func ...
locks [-col] [directly] inverted by func ...
Lists information about the locks of the loaded files. Only
those variables that are actually used in lock manipulation
routines are shown; locks that are simply declared but never
manipulated are not shown.
TABLE A-8 locks
Options
locks
-c
-c
(cover) This option shows information about lock
hierarchies. Such relationships are described using the assert
rwlock covers subcommand. (When locks are
arranged in such a hierarchy, the covering lock must be held,
at least for read access, whenever any of the covered locks is
held. While holding the covering lock for write access, it is
unnecessary to acquire any of the covered locks.) If a lock
covers other locks, those locks show as
covered={ lock ... }
If a lock is covered by another lock, the covering lock
shows as
cover=lock
(long) Equivalent to -co.
-co
(other) Causes the type of the lock to be shown as
(type) where type is mutex, rwlock,
or ambiguous type [used as a mutex in some places and as
a rwlock (readers-writer) in other places].
Lists information about individual locks. By default, this
variant of the subcommand gives all the details about the locks, as
if -co had been specified.
Lists information about all locks.
locks [-col] [directly] affected by func ...
Lists only those locks that may be affected (acquired, released, upgraded,
or downgraded) as a result of calling the specified functions.
locks [-col] [directly] inverted by func ...
Lists only those locks that may be inverted by the specified functions. For
example, in the following code, f1 directly inverts lock m1 and indirectly
inverts lock m2:
f1() { pthread_mutex_unlock(&m1); f2(); pthread_mutex_lock(&m1); }
f2() { f3(); }
f3() { pthread_mutex_unlock(&m2); pthread_mutex_lock(&m2); }
members struct_tag
Lists the members of the struct with the specified
tag, one per line. For structures that were not assigned a tag, the
notation file@line is used (for example, x.c@29),
where the file and line number are the source location of the
struct declaration.
x.c@29
members is particularly useful to use as input to
other LockLint subcommands. For example, when trying to assert that
a lock protects all the members of a struct, the
following command suffices:
% lock_lint assert foo::lock protects `lock_lint members foo`
lock_lint assert foo::lock protects `lock_lint members foo`
Note - The members subcommand
does not list any fields of the struct that are
defined to be of type mutex_t, rwlock_t,
krwlock_t, or kmutex_t.
mutex_t
rwlock_t
krwlock_t
kmutex_t
order [lock [lock]]
order summary
The order subcommand lists information about the
order in which locks are acquired by the code being analyzed. It
may be run only after the analyze subcommand.
order [lock [lock]]
Shows the details about lock pairs. For example, the command
% lock_lint order foo bar
lock_lint order foo bar
shows whether an attempt was made to acquire lock bar
while holding lock foo. The output looks something like
the following:
:foo :bar seen (first never write-held), valid
First the output tells whether such an attempt actually occurred
(seen or unseen). If the attempt occurred,
but never with one or both of the locks write-held, a parenthetical
message to that effect appears, as shown. In this case, foo
was never write-held while acquiring bar.
seen
unseen
order summary
Lists the lock orders gathered during the analysis. For example:
:f :e :d :g :a
:f :c :g :a
In this example, there are two orders because there is not
enough information to allow locks e and d to
be ordered with respect to lock c.
e
Some cycles are shown, while others are not. For example,
:a :b :c :b
:a :b :c :b
is shown, but
:a :b :c :a
:a :b :c :a
(where no other lock is ever held while trying to acquire one of
these) is not. Deadlock information from the analysis is still
reported.
Lists calls made through function pointers in the loaded files.
Each call is shown as:
function [location of call] calls through funcptr func_ptr
For example,
foo.c:func1 [foo.c,84] calls through funcptr bar::read
means that at line 84 of foo.c, in func1
of foo.c, the function pointer bar::read
(member read of a pointer to struct of type
bar) is used to call a function.
bar::read
reallow func ...
Allows you to make exceptions to disallow
subcommands. For example, to prevent LockLint from analyzing any
calling sequence in which f() calls g() calls
h(), except when f() is called by e()
which was called by d(), use the commands
e()
d()
% lock_lint disallow f g h
% lock_lint reallow d e f g h
In some cases you may want to state that a function should only
be called from a particular function, as in this example:
% lock_lint disallow f
% lock_lint reallow e f
Note - A reallow subcommand only
suppresses the effect of a disallow subcommand if the
sequences end the same. For example, after the following
commands, the sequence d e f g h would still be
disallowed:
d e f g h
% lock_lint disallow e f g h
% lock_lint reallow d e f g
Lists the calling sequences that are reallowed, as specified
using the reallow subcommand.
Pops the saved state stack, restoring LockLint to the state of
the top of the saved-state stack, prints the description, if any,
associated with that state, and saves the state again. Equivalent
to restore followed by save.
save [description]
Saves the current state on a stack, along with the given description. For example:
%: lock_lint load *.ll
%: lock_lint save Before Analysis
%: lock_lint analyze
   <output from analyze>
%: lock_lint vars -h | grep \*
   <apparent members of struct foo are not consistently protected>
%: lock_lint refresh Before Analysis
%: lock_lint assert lock1 protects `lock_lint members foo`
%: lock_lint analyze
   <output now contains info about where the assertion is violated>
lock_lint save Before Analysis
lock_lint analyze
lock_lint vars -h | grep \*
lock_lint refresh Before Analysis
lock_lint assert lock1 protects `lock_lint members foo`
saves
Lists the states saved on the stack through the save subcommand.
start [cmd]
Starts a LockLint session: it establishes the LockLint context and sets the
environment variable LL_CONTEXT, which contains the path to the temporary
directory of files used to maintain a LockLint session.
cmd specifies a command and its path and options. By
default, if cmd is not specified, the value of $SHELL
is used.
Note - To exit a LockLint session use the
exit command of the shell you are using.
Start
The following examples show variations of the start
subcommand.
% lock_lint start
lock_lint start
LockLint's context is established and LL_CONTEXT is
set. Then the program identified by $SHELL is executed.
Normally, this is your default shell. LockLint subcommands can now
be entered. Upon exiting the shell, the LockLint context is
removed.
LL_CONTEXT
% lock_lint start foo
lock_lint start foo
After establishing the LockLint context and setting LL_CONTEXT, the command
foo is executed (for example, as /bin/csh -c foo). Upon exiting foo, the
LockLint context is removed.
If you use a shell script to start LockLint, insert #! in
the first line of the script to define the name of the interpreter that
processes that script. For example, to specify the C-shell the first
line of the script is:
#!
#! /bin/csh
#! /bin/csh
In this case, the user starts LockLint with the Korn shell:
lock_lint start /bin/ksh
After establishing the LockLint context and setting LL_CONTEXT,
the command /bin/ksh is executed. This results in the
user interacting with an interactive Korn shell. Upon exiting the
Korn shell, the LockLint context is removed.
/bin/ksh
sym
sym name ...
Lists the fully qualified names of various things the specified
names could refer to within the loaded files. For example, foo
might refer both to variable x.c:func1/foo and to
function y.c:foo, depending on context.
x.c:func1/foo
y.c:foo
unassert
unassert vars var ...
Undoes any assertion about locks protecting the specified
variables. There is no way to remove an assertion about a lock
protecting a function.
vars [-aho] var ...
vars [-ahilo]
vars [-ahlo] protected by lock
vars [-ahlo] [directly] read by func ...
vars [-ahlo] [directly] written by func ...
vars [-ahlo] [directly] accessed by func ...
Lists information about the variables of the loaded files. Only
those variables that are actually used are shown; variables
that are simply declared in the program but never accessed are not
shown.
TABLE A-9 vars
Options
vars
Option
-a
(assert) Shows information about which lock is
supposed to protect each variable, as specified by the assert
mutex|rwlock protects subcommand. The
information is shown as follows
protects
assert=lock
If the assertion is violated, then after analysis this will be
preceded by an asterisk, such as *assert=<lock>.
*assert=<
>
-h
(held) Shows information about which locks were
consistently held when the variable was accessed. This
information is shown after the analyze subcommand
has been run. If the variable was never accessed, this
information is not shown. When it is shown, it looks like this:
held={ <lock> ... }
held={ <
If no locks were consistently held and the variable was
written, this is preceded by an asterisk, such as *held={
}. Unlike funcs, the vars
subcommand lists a lock as protecting a variable even if the
lock was not actually held, but was simply covered by another
lock.
*held={
}
(ignored) causes even ignored variables to be
listed.
(long) Equivalent to -aho.
-aho
(other) Where applicable, shows information about
each variable
=cond_var
=cond_var
Indicates that this variable is used as a condition
variable.
Indicates that LockLint has been told to ignore the variable
explicitly via an ignore subcommand.
=read-only
=read-only
Means that LockLint has been told (by assert read only)
that the variable is read-only, and will complain if it is
written. If it is written, then after analysis this will be
followed by an asterisk, such as =read-only* for
example.
=read-only*
=readable
=readable
Indicates that LockLint has been told by a declare
readable subcommand that the variable may be safely read
without holding a lock.
declare
readable
=unwritten
=unwritten
May appear after analysis, meaning that while the variable
was not declared read-only, it was never written.
vars [-ahlo] protected by lock
Lists only those variables that are protected by the specified
lock. This subcommand may be run only after the analyze
subcommand has been run.
vars [-ahlo] [directly] read by
Lists only those variables that may be read as a result of
calling the specified functions. See notes below on directly.
vars [-ahlo] [directly] written by
Lists only those variables that may be written as a result of
calling the specified functions. See notes below on directly.
vars [-ahlo] [directly] accessed by
Lists only those variables that may be accessed (read or
written) as a result of calling the specified functions.:
foo() { pthread_mutex_unlock(&mtx); ... pthread_mutex_lock(&mtx);}):
foo()
mtx
zort_list
NULL
ZORT* zort_list;    /* VARIABLES PROTECTED BY mtx: zort_list */

void f() {
    pthread_mutex_lock(&mtx);
    if (zort_list == NULL) {    /* trying to be careful here */
        pthread_mutex_unlock(&mtx);
        return;
    }
    foo();
    zort_list->count++;    /* but zort_list may be NULL here!! */
    pthread_mutex_unlock(&mtx);
}
Lock inversions may be found using the commands:
% lock_lint funcs [directly] inverting lock ...
% lock_lint locks [directly] inverted by func ...
An interesting question to ask is "Which functions acquire
locks that then get inverted by calls they make?" That is,
which functions are in danger of having stale data? The following
(Bourne shell) code can answer this question:
$ LOCKS=`lock_lint locks`
$ lock_lint funcs calling `lock_lint funcs inverting $LOCKS`
The following gives similar output, separated by lock:
for lock in `lock_lint locks`
do
    echo "functions endangered by inversions of lock $lock"
    lock_lint funcs calling `lock_lint funcs inverting $lock`
done
|
http://developers.sun.com/solaris/articles/locklint.html
|
crawl-002
|
en
|
refinedweb
|
void swap ( set<Key,Compare,Allocator>& st );
Swap content
Exchanges the content of the container with the content of st, which is another set object containing elements of the same type. Sizes may differ. After the call to this member function, the elements in this container are those which were in st before the call, and the elements of st are those which were in this. All iterators, references and pointers remain valid for the swapped objects. Notice that a global algorithm function exists with this same name, swap, and the same behavior.
// swap sets
#include <iostream>
#include <set>
using namespace std;
int main ()
{
int myints[]={12,75,10,32,20,25};
set<int> first (myints,myints+3); // 10,12,75
set<int> second (myints+3,myints+6); // 20,25,32
set<int>::iterator it;
first.swap(second);
cout << "first contains:";
for (it=first.begin(); it!=first.end(); it++) cout << " " << *it;
cout << "\nsecond contains:";
for (it=second.begin(); it!=second.end(); it++) cout << " " << *it;
cout << endl;
return 0;
}
first contains: 20 25 32
second contains: 10 12 75
|
http://www.cplusplus.com/reference/stl/set/swap/
|
crawl-002
|
en
|
refinedweb
|
nose-testconfig 0.4
Test Configuration plugin for nosetests.
- Project hosting: <>
About
Written by Jesse Noller Licensed under the Apache Software License, 2.0
You can install it with easy_install nose-testconfig
What It Does
nose-testconfig is a plugin to the nose test framework which provides a faculty for passing test-specific (or test-run specific) configuration data to the tests being executed.
Currently configuration files in the following formats are supported:
- YAML (via PyYAML)
- INI (via ConfigParser)
- Pure Python (via Exec)
The plugin is meant to be flexible, ergo the support of exec'ing arbitrary python files as configuration files with no checks. The default format is assumed to be ConfigParser ini-style format.
The plugin provides a method of overriding certain parameters from the command line (assuming that the main "config" object is a dict) and can easily have additional parsers added to it.
Test Usage
For now (until something better comes along) tests can import the "config" singleton from testconfig:
from testconfig import config
By default, YAML files parse into a nested dictionary, and ConfigParser ini files are also collapsed into a nested dictionary for foo[bar][baz] style access. Tests can obviously access configuration data by referencing the relevant dictionary keys:
from testconfig import config

def test_foo():
    target_server_ip = config['servers']['webapp_ip']
Warning: Given this is just a dictionary singleton, tests can easily write into the configuration. This means that your tests can write into the config space and possibly alter it. This also means that threaded access into the configuration can be interesting.
When using pure python configuration - obviously the "sky is the limit" - given that the configuration is loaded via an exec, you could potentially modify nose, the plugin, etc. However, if you do not export a config{} dict as part of your python code, you obviously won't be able to import the config object from testconfig.
When using YAML-style configuration, you get a lot of the power of pure python without the danger of unprotected exec() - you can obviously use the pyaml python-specific objects and all of the other YAML creamy goodness.
Defining a configuration file
Simple ConfigParser style:
[myapp_servers]
main_server = 10.1.1.1
secondary_server = 10.1.1.2
So your tests access the config options like this:
from testconfig import config

def test_foo():
    main_server = config['myapp_servers']['main_server']
YAML style configuration:

myapp:
    servers:
        main_server: 10.1.1.1
        secondary_server: 10.1.1.2
And your tests can access it thus:
from testconfig import config

def test_foo():
    main_server = config['myapp']['servers']['main_server']
Python configuration file:
import socket

global config
config = {}

possible_main_servers = ['10.1.1.1', '10.1.1.2']

for srv in possible_main_servers:
    try:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.connect((srv, 80))
    except:
        continue
    s.close()
    config['main_server'] = srv
    break
And lo, the config is thus:
from testconfig import config

def test_foo():
    main_server = config['main_server']
If you need to put python code into your configuration, you either need to use the python-config file faculties, or you need to use the !!python tags within PyYAML/YAML - raw ini files no longer have any sort of eval magic.
Command line options
After it is installed, the plugin adds the following command line flags to nosetests:
--tc-file=TESTCONFIG    Configuration file to parse and pass to tests
                        [NOSE_TEST_CONFIG_FILE]
--tc-format=TESTCONFIGFORMAT
                        Test config file format, default is configparser ini
                        format [NOSE_TEST_CONFIG_FILE_FORMAT]
--tc=OVERRIDES          Option:Value specific overrides.
--tc-exact              Optional: Do not explode periods in override keys to
                        individual keys within the config dict, instead treat
                        them as config[my.toplevel.key] ala sqlalchemy.url in
                        pylons.
Passing in an INI configuration file:
$ nosetests -s --tc-file example_cfg.ini
Passing in a YAML configuration file:
$ nosetests -s --tc-file example_cfg.yaml --tc-format yaml
Passing in a Python configuration file:
$ nosetests -s --tc-file example_cfg.py --tc-format python
Overriding a configuration value on the command line:
$ nosetests -s --tc-file example_cfg.ini --tc=myvalue.sub:bar
Overriding multiple key:value pairs:
$ nosetests -s --tc-file example_cfg.ini --tc=myvalue.sub:bar \
    --tc=myvalue.sub2:baz --tc=myvalue.sub3:bar3
Warning: When using the --tc= flag, you can pass it in as many times as you want to override as many keys/values as needed, however you can not use it to add in new keys: The configuration key must already be defined. The format is in parent.child.child = value format - the periods are translated into keys within the config dict, for example:
myvalue.sub2:baz = config[myvalue][sub2] = baz
You can override the explosion of the periods by passing in the --tc-exact argument on the command line.
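For instance, using the my.toplevel.key example above (the config file name and value here are illustrative), the key is kept literal instead of being exploded into nested dicts:

$ nosetests -s --tc-file example_cfg.ini --tc=my.toplevel.key:somevalue --tc-exact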
Changes & News
- 0.4:
- Per feedback from Kumar and others, the eval()'ing of ini-file values has been removed: allowing arbitrary python in the values was more annoying less standard then was worth it.
- Added the --tc-exact command line flag, to block the exploding of name.name values into dicts-within-dicts
- Updated the docs to parse right.
- 0.3:
- Fix documentation examples per Kumar's feedback.
- 0.2:
- Fix pypi packaging issues
- 0.1:
- Initial release. May contain bits of glass.
- Author: Jesse Noller <jnoller at gmail com>
- License: Apache License, Version 2.0
- Categories
- Package Index Owner: jnoller
- DOAP record: nose-testconfig-0.4.xml
|
http://pypi.python.org/pypi/nose-testconfig/0.4
|
crawl-002
|
en
|
refinedweb
|
Build a C# NotifyIcon Scheduled Outlook Mail Checker
by
Peter A. Bromberg, Ph.D.
"There's a fine line between fishing and just standing on the shore like an idiot." -- Steven Wright
I don't know about you, but where I work we all keep Outlook running in the Notification Area so we don't miss any emails. Now this is about as dumb as the Braille they put on the drive-through teller machines. The problem is that the bloated thing takes up about 30MB of memory just to sit there unused for most of the day, in case an email comes in that you don't want to miss.
I like to run my machine with as much available memory as possible, so I set out to create a better solution. The answer is a little app that gets run every 10 minutes by Task Scheduler, checks the Outlook Inbox, and if there is no unread mail, it quits. If there is unread mail, it pops up a Notification Icon Balloon Tip from the Notification Area telling you how many unread emails you have. If you click the balloon within 20 seconds, it opens up Outlook for you. If you don't click within 20 seconds, it assumes you aren't there and it goes away till the next run time.
Simple, efficient, and it saves me 30MB of overhead. If you like this idea, it can be used for other things as well - you could certainly revise it to check your POP email accounts as well. This little app has two parts - there's a Form which basically serves as the container for the NotifyIcon and the code, and there's the NotifyIcon BalloonTip component. I used one by Ivo Closs, mostly because I don't believe in reinventing the wheel and his seems to work just fine for what I needed. There are probably a dozen different implementations of these by different developers.
Let's take a look at the Form code, since you can play with the Notification Icon BalloonTip component on your own after you download the solution below:
using System;
using System.Drawing;
using System.Collections;
using System.ComponentModel;
using System.Windows.Forms;
using System.Data;
using NotifyIcon;
namespace TrayNotifyIcon
{
public class Form1 : System.Windows.Forms.Form
{
private System.ComponentModel.Container components = null;
private System.Timers.Timer timer1;
private Microsoft.Office.Interop.Outlook._Application OutlookApp=null;
public Form1()
{
Application.EnableVisualStyles();
Application.DoEvents();
InitializeComponent();
}
protected override void Dispose( bool disposing )
{
	if( disposing )
	{
		if (components != null)
			components.Dispose();
	}
	base.Dispose( disposing );
}
#region Windows Form Designer generated code
private void InitializeComponent()
{
System.Resources.ResourceManager resources = new System.Resources.ResourceManager(typeof(Form1));
this.timer1 = new System.Timers.Timer();
((System.ComponentModel.ISupportInitialize)(this.timer1)).BeginInit();
this.timer1.Enabled = true;
this.timer1.Interval = 20000;
this.timer1.SynchronizingObject = this;
this.timer1.Elapsed += new System.Timers.ElapsedEventHandler(this.timer1_Elapsed);
this.AutoScaleBaseSize = new System.Drawing.Size(5, 13);
this.ClientSize = new System.Drawing.Size(115, 54);
this.Icon = ((System.Drawing.Icon)(resources.GetObject("$this.Icon")));
this.Name = "Form1";
this.Text = "Outlook Check";
this.WindowState = System.Windows.Forms.FormWindowState.Minimized;
this.Resize += new System.EventHandler(this.FormResize);
this.Closing += new System.ComponentModel.CancelEventHandler(this.FormClosing);
this.Load += new System.EventHandler(this.Form1_Load);
((System.ComponentModel.ISupportInitialize)(this.timer1)).EndInit();
}
#endregion
[STAThread]
static void Main()
{
	Application.Run(new Form1());
}
private BalloonTip TrayNotifyIcon = new BalloonTip();
private ContextMenu TrayContextMenu = new ContextMenu();
private void Form1_Load(object sender, System.EventArgs e)
{
	string name = System.Diagnostics.Process.GetCurrentProcess().ProcessName;
	System.Diagnostics.Process[] p = System.Diagnostics.Process.GetProcessesByName(name);
	if (p.Length > 1)
	{
		TrayNotifyIcon.Visible = false;
		System.Environment.Exit(0);
	}
	this.Hide();
TrayContextMenu.MenuItems.Add("&Exit", new System.EventHandler(this.mnuExit_Click));
TrayNotifyIcon.Text = this.Text; // Help text for MouseLeave on Icon
TrayNotifyIcon.Icon = this.Icon; // Icon for NotifyIcon
TrayNotifyIcon.Form = this; // Form to restore when DoubleClick on Icon
TrayNotifyIcon.ContextMenu = TrayContextMenu; // ContextMenu for RightClick on Icon
TrayNotifyIcon.Visible = true; // Show icon in TaskBar
TrayNotifyIcon.BalloonClick+=new EventHandler(TrayNotifyIcon_BalloonClick);
int x= GetUnreadMessages();
OutlookApp.Application.Quit();
if(x >0)
{
	string message = "There are " + x.ToString() + " New Emails.";
	TrayNotifyIcon.ShowBalloon("Outlook Email", message, NotifyIcon.BalloonTip.NotifyInfoFlags.Info, 100);
	this.timer1.Start();
	}
	else
	{
		TrayNotifyIcon.Remove();
	}
}
private void FormClosing(object sender, System.ComponentModel.CancelEventArgs e)
{
	e.Cancel = true;
}
private void FormResize(object sender, System.EventArgs e)
{
	if (this.WindowState == FormWindowState.Minimized)
		this.Hide(); // assumption: the original body was lost in extraction; hiding on minimize matches the app's behavior
}
private int GetUnreadMessages()
{
	OutlookApp = new Microsoft.Office.Interop.Outlook.ApplicationClass();
	if (OutlookApp != null)
	{
		Microsoft.Office.Interop.Outlook.MAPIFolder inbox =
			OutlookApp.Session.GetDefaultFolder(
				Microsoft.Office.Interop.Outlook.OlDefaultFolders.olFolderInbox);
		return inbox.UnReadItemCount;
	}
	return -1;
}
private void mnuExit_Click(object sender, EventArgs e)
{
	TrayNotifyIcon.Dispose();
}
private void TrayNotifyIcon_BalloonClick(object sender, EventArgs e)
{
	System.Diagnostics.Process.Start("Outlook.exe");
}
private void timer1_Elapsed(object sender, System.Timers.ElapsedEventArgs e)
{
	// Body lost in extraction; per the article, the timer closes the app if the balloon is not clicked.
	TrayNotifyIcon.Dispose();
}
}
}
All the "action" is in the Form_Load handler. First, I check to see if there is another instance of this process running, and if so, I exit gracefully. You don't really need this code, but I've left it in for completeness sake. Then, I set up my TrayNotifyIcon and it's events. Then, I make a call to the GetUnreadMesages method, which simply starts an instance of the OutLook.ApplicationClass, sets it to the Inbox folder, and gets the count of unread Inbox Items, and returns it. Back in the calling method, I call .Quit() to get rid of Outlook, since I am done with it for now. If my count is greater than zero, I pop up a NotifyIcon BalloonTip with the message, and start a 20 second timer.
Note that in the downloadable code, I do even more- I get a hold of the CommandBar and call Send/Receive All to ensure that anything on the server is downloaded first. I also call ReleaseComObject on all the little boogers and finally kill the Outlook process after a 1 second sleep. After all, the whole point of this experiment is to keep Outlook unloaded!
If the user clicks on the balloontip before the 20 seconds are up, the TrayNotifyIcon_BalloonClick event is fired, and Outlook is opened for real using the Process class so that I can read the mail. At this point, we can get rid of our app since it has completed its mission. The rest is just window dressing. If nobody clicks, the timer kicks in and closes the app for us.
One last thought: Because of the way Microsoft has clobbered the PIA's with Office, you have to build this by setting a COM reference to the exact version of Outlook (9,10,11, XX?) which doesn't exactly make it as portable as could be. However, you can use a workaround with Late Binding. Here's an example:
private int GetUnreadMessagesLateBound()
{
Type outlook;
object oApp;
outlook = Type.GetTypeFromProgID("Outlook.Application");
oApp = Activator.CreateInstance(outlook);
Object oNameSpace = oApp.GetType().InvokeMember("GetNamespace",
BindingFlags.GetProperty, null, oApp, new object[1]{"MAPI"});
Object oFolder = oNameSpace.GetType().InvokeMember("GetDefaultFolder",
BindingFlags.GetProperty, null, oNameSpace, new object[] {6}); // ("6" is inbox)
object oItems = oFolder.GetType().
InvokeMember("UnreadItemCount",BindingFlags.GetProperty,null,oFolder,null);
return (int)oItems;
}
The only thing left to do with this is build a Release, go into the Control Panel and set up Scheduled Tasks to run this every X minutes during the work day. That's it. Enjoy.
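If you prefer the command line over the Control Panel applet, a scheduled task can also be created with schtasks; the task name and executable path below are placeholders for wherever you put the Release build:

schtasks /create /tn "OutlookCheck" /tr "C:\Tools\TrayNotifyIcon.exe" /sc minute /mo 10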
Download the VS.NET Solution that accompanies this article
Articles
Submit Article
Message Board
Software Downloads
Videos
Rant & Rave
|
http://www.eggheadcafe.com/articles/20060215.asp
|
crawl-002
|
en
|
refinedweb
|
collective.recipe.omelette 0.9
Creates a unified directory structure of all namespace packages, symlinking to the actual contents, in order to ease navigation.
Detailed Documentation
Introduction
Namespace packages offer the huge benefit of being able to distribute parts of a large system in small, self-contained pieces. However, they can be somewhat clunky to navigate, since you end up with a large list of eggs in your egg cache, and then a seemingly endless series of directories you need to open to actually find the contents of your egg.
This recipe sets up a directory structure that mirrors the actual python namespaces, with symlinks to the egg contents. So, instead of this...:
egg-cache/
    my.egg.one-1.0-py2.4.egg/
        my/
            egg/
                one/
                    (contents of first egg)
    my.egg.two-1.0-py2.4.egg/
        my/
            egg/
                two/
                    (contents of second egg)
...you get this:
omelette/
    my/
        egg/
            one/
                (contents of first egg)
            two/
                (contents of second egg)
You can also include non-eggified python packages in the omelette. This makes it simple to get a single path that you can add to your PYTHONPATH for use with specialized python environments like when running under mod_wsgi or PyDev.
Typical usage with Zope and Plone
For a typical Plone buildout, with a part named "instance" that uses the plone.recipe.zope2instance recipe and a part named "zope2" that uses the plone.recipe.zope2install recipe, the following additions to buildout.cfg will result in an omelette including all eggs and old-style Products used by the Zope instance as well as all of the packages from Zope's lib/python. It is important that omelette come last if you want it to find everything:
[buildout]
parts =
    ...(other parts)...
    omelette

...

[omelette]
recipe = collective.recipe.omelette
eggs = ${instance:eggs}
products = ${instance:products}
packages = ${zope2:location}/lib/python ./
(Note: If your instance part lacks a 'products' variable, omit it from the omelette section as well, or the omelette will silently fail to build.)
Supported options
The recipe supports the following options:
- eggs
- List of eggs which should be included in the omelette.
- location
- (optional) Override the directory in which the omelette is created (default is parts/[name of buildout part])
- ignore-develop
- (optional) Ignore eggs that you are currently developing (listed in ${buildout:develop}). Default is False
- ignores
- (optional) List of eggs to ignore when preparing your omelette.
- packages
- List of Python packages whose contents should be included in the omelette. Each line should be in the format [package_location] [target_directory], where package_location is the real location of the package, and target_directory is the (relative) location where the package should be inserted into the omelette (defaults to top level). A short sketch appears after this list.
- products
- (optional) List of old Zope 2-style products directories whose contents should be included in the omelette, one per line. (For backwards-compatibility -- equivalent to using packages with Products as the target directory.)
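A minimal sketch of the packages option with explicit target directories; the paths below are hypothetical and only illustrate the [package_location] [target_directory] format:

[omelette]
recipe = collective.recipe.omelette
eggs = ${instance:eggs}
packages =
    ${buildout:directory}/src/mylibs ./
    ${buildout:directory}/src/legacy Products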
Windows support
Using omelette on Windows requires the junction utility to make links. Junction.exe must be present in your PATH when you run omelette.
Using omelette with eggtractor
Mustapha Benali's buildout.eggtractor provides a handy way for buildout to automatically find development eggs without having to edit buildout.cfg. However, if you use it, the omelette recipe won't be aware of your eggs unless you a) manually add them to the omelette part's eggs option, or b) add the name of the omelette part to the builout part's tractor-target-parts option.
Using omelette with zipped eggs
Omelette doesn't currently know how to deal with eggs that are zipped. If it encounters one, you'll see a warning something like the following:
omelette: Warning: (While processing egg elementtree) Egg contents not found at /Users/davidg/.buildout/eggs/elementtree-1.2.7_20070827_preview-py2.4.egg/elementtree. Skipping.
You can tell buildout to unzip all eggs by setting the unzip = true flag in the [buildout] section. (Note that this will only take effect for eggs downloaded after the flag is set.)
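A minimal sketch of that flag in context (only the unzip line is the point here):

[buildout]
parts = ...
unzip = true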
Running the tests
Just grab the recipe from svn and run:
python2.4 setup.py test
Known issue: The tests run buildout in a separate process, so it's currently impossible to put a pdb breakpoint in the recipe and debug during the test. If you need to do this, set up another buildout which installs an omelette part and includes collective.recipe.omelette as a development egg.
Reporting bugs or asking questions
There is a shared bugtracker and help desk on Launchpad:
Change history
0.9 (2009-04-11)
- Adjusted log-levels to be slightly less verbose for non-critical errors. [malthe]
0.8 (2009-01-14)
- Fixed 'OSError [Errno 20] Not a directory' on zipped eggs, for example when adding the z3c.sqlalchemy==1.3.5 egg. [maurits]
0.7 (2008-09-10)
- Actually add namespace declarations to generated __init__.py files. [davisagli]
- Use egg-info instead of guessing paths from package name. This also fixes eggs which have a name different from the contents. [fschulze]
0.6 (2008-08-11)
- Documentation changes only. [davisagli]
0.5 (2008-05-29)
- Added uninstall entry point so that the omelette can be uninstalled on Windows without clobbering things outside the omelette path. [optilude]
- Support Windows using NTFS junctions (see) [optilude]
- Ignore zipped eggs and fakezope2eggs-created links. [davisagli]
- Added 'packages' option to allow merging non-eggified Python packages to any directory in the omelette (so that, for instance, the contents of Zope's lib/python can be merged flexibly). [davisagli]
0.4 (2008-04-07)
- Added option to include Products directories. [davisagli]
- Fixed ignore-develop option. [davisagli]
0.3 (2008-03-30)
- Fixed test infrastructure. [davisagli]
- Added option to ignore develop eggs [claytron]
- Added option to ignore eggs [claytron]
- Added option to override the default omelette location. [davisagli]
0.2 (2008-03-16)
- Fixed so created directories are not normalized to lowercase. [davisagli]
0.1 (2008-03-10)
- Initial basic implementation. [davisagli]
- Created recipe with ZopeSkel. [davisagli]
Contributors
- David Glick [davisagli]
- Clayton Parker [claytron]
- Martin Aspeli [optilude]
- Florian Schulze [fschulze]
- Maurits van Rees [maurits]
- Malthe Borch [malthe]
-
|
http://pypi.python.org/pypi/collective.recipe.omelette/
|
crawl-002
|
en
|
refinedweb
|
Introduction
The __name__ special variable is used to check whether a file has been imported as a module or not, and to identify a function, class, or module object by its __name__ attribute.
Remarks.
__name__ == '__main__'
The special variable __name__ is not set by the user. It is mostly used to check whether the module is being run by itself or because an import was performed. To prevent your module from running certain parts of its code when it gets imported, check if __name__ == '__main__'.
Let module1.py be just one line long:
import module2
And let's see what happens, depending on module2.py
Situation 1
module2.py
print('hello')
Running module1.py will print
hello
Running module2.py will print
hello
Situation 2
module2.py
if __name__ == '__main__': print('hello')
Running module1.py will print nothing
Running module2.py will print
hello
function_class_or_module.__name__
The special attribute __name__ of a function, class or module is a string containing its name.
import os

class C:
    pass

def f(x):
    x += 2
    return x

print(f)            # <function f at 0x029976B0>
print(f.__name__)   # f
print(C)            # <class '__main__.C'>
print(C.__name__)   # C
print(os)           # <module 'os' from '/spam/eggs/'>
print(os.__name__)  # os
The __name__ attribute is not, however, the name of the variable which references the class, method or function; rather it is the name given to it when defined.
def f():
    pass

print(f.__name__)  # f - as expected

g = f
print(g.__name__)  # f - even though the variable is named g, the function is still named f
This can be used, among other things, for debugging:
def enter_exit_info(func):
    def wrapper(*arg, **kw):
        print('-- entering', func.__name__)
        res = func(*arg, **kw)
        print('-- exiting', func.__name__)
        return res
    return wrapper

@enter_exit_info
def f(x):
    print('In:', x)
    res = x + 2
    print('Out:', res)
    return res

a = f(2)

# Outputs:
# -- entering f
# In: 2
# Out: 4
# -- exiting f
Use in logging
When configuring the built-in logging functionality, a common pattern is to create a logger with the __name__ of the current module:
import logging

logger = logging.getLogger(__name__)
This means that the fully-qualified name of the module will appear in the logs, making it easier to see where messages have come from.
|
https://pythonpedia.com/en/tutorial/1223/the---name---special-variable
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
How to make StatusInteractivePopUpWindow active by default
Hi,
When I use StatusInteractivePopUpWindow as a UI base for an extension, the vanilla List within is not active (in focus?) by default. So when I press its shortcut and the window opens, I need to click on the List for it to be active. It seems that there is no problem like this if I use vanilla's Window, but the problem does still happen with FloatingWindow. I was wondering why that is? Is there a way to make a popup window that would be active? Can there be an argument in StatusInteractivePopUpWindow to control this? I'm not sure about the documentation on the website — it mentions Window in the sample code instead of StatusInteractivePopUpWindow.
While I’m here, is there an option to bind “Escape” key to a close() command?
Thanks!
Anya
the different types of windows in macOS have different behaviours. this is documented in the macOS Human Interface Guidelines, see Window Anatomy > Types of Windows.
it's possible to change some of these default behaviours by accessing the underlying AppKit objects directly. here's an example showing how you can make the list inside StatusInteractivePopUpWindow become the first responder, and how to use the Escape and Enter keys to cancel/confirm the dialog.
from mojo.UI import StatusInteractivePopUpWindow
from vanilla import Button, List, HorizontalLine

class StatusInteractivePopUpWindowDemo(object):

    def __init__(self):
        self.w = StatusInteractivePopUpWindow((200, 300))
        self.w.myButton = Button((10, 10, -10, 20), "My Button")
        self.w.myList = List((10, 40, -10, -55), ['a', 'b', 'c'])
        self.w.line = HorizontalLine((10, -40, -10, 1))
        self.w.cancelButton = Button((10, -30, 70, 20), "Cancel", callback=self.cancelCallback)
        self.w.okButton = Button((90, -30, 70, 20), "OK", callback=self.okCallback)
        # make `myList` the first responder
        self.w.getNSWindow().makeFirstResponder_(self.w.myList.getNSTableView())
        # define the OK button as the default one (Enter key)
        self.w.setDefaultButton(self.w.okButton)
        # bind the Cancel button to the Escape key
        self.w.cancelButton.bind(chr(27), [])
        self.w.open()

    def cancelCallback(self, sender):
        print('cancelled')
        self.w.close()

    def okCallback(self, sender):
        # get all items in list
        items = self.w.myList.get()
        # get list selection
        selection = self.w.myList.getSelection()
        # get selected list items
        selectedItems = [item for i, item in enumerate(items) if i in selection]
        print(selectedItems)
        self.w.close()

StatusInteractivePopUpWindowDemo()
hope this helps!
ps. see also @tal’s reply to a similar question here (33m18s)
Thank you so much, Gustavo!
|
https://forum.robofont.com/topic/615/how-to-make-statusinteractivepopupwindow-active-by-default
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
Why most Front End Dev interviews (#JavaScript #Typescript) are Shit
Kristian Ivanov
Originally published at hackernoon.com
I am sorry I didn't censor shit, but shit pretty much === s**t and everyone would know what I mean either way.
First of all a bit of a back story
I am changing my job (wohoo for me and oh no for my team). I was a dev, I became a TL, we were a dev team, then we became a game team (dev, interface and animation) a while ago, then I decided to quit (if you want to know why, feel free to ask, it is not the point of the article). A friend of mine works at a company in the same field as my current company and wouldn't stop bothering me until I gave their company a shot. His company, however, was looking for a senior game/front end dev. Considering that I have worked at my current company for nearly 6 years, I decided to give it a shot, if only to see firsthand how the current market for developers was, outside of my company.
Second
both companies should, in my opinion, stay unnamed. Just assume they are generic companies.
Third
Let's call them interview problems, seen from a TL and dev perspective, when a TL tries a dev interview. It may sound weird or strange, but it is a nice perspective of a guy that has conducted interviews with devs in one company and the same guy trying an interview as a dev in another company.
Fourth
I will split this into two parts. You will know why in a bit.
Fourth point one – human parts
The interview was first conducted with the person who represents the dev team and their manager. Their manager, of course, focused on my career, what I have done, how much (as a percentage) my job as a TL allowed me to write code, with what IDEs I write my code, what my side projects were, why I wrote them in those programming languages and so on. It was pretty understandable. However, I expected more of their dev TL. He had two questions, for about an hour of interview. Question one – have you used Gulp; and question two – what is your approach to learning a new technology or new API or new framework, etc.
Apparently I made a good enough impression to be asked to take a test at the company. I wasn't sure whether it was the psychological test, the test to be sure how much info I would be allowed to know and not leak outside, or the tech test. I went there and talked with their manager. He was a pretty nice guy with understanding, reasonable motives and so on. I found out it was the technical test that I was about to do. Which was OK, it isn't like I was going to study for it anyway. The following are my impressions of the test.
Fourth point two – test
Disappointment one
It is only the test. You don’t talk with devs, their team lead, an architect or anyone else. It. Is. Just. The. Test.
Disappointment two – the test itself
These aren’t typos or writing mistakes. Everything is written as it was in the test.
The test is a three-page piece. It is split into JavaScript, Node.js & Gulp, and TypeScript.
Now to the questions. Some are omitted in order to avoid repetition.
Questions about JavasScript
- What is a potential pitfall with using typeof bar === 'object' to determine if bar is an object? How can this pitfall be avoided? Answer: Welcome to 2015–6 questions on forums. It is a mostly reliable way, but typeof null === 'object' also returns true, so a null value slips through the check; avoid the pitfall by also checking that bar !== null. Keep it in mind. You will rarely (never) use it at work, but it is apparently a very common JS question for interviews and tests.
- What is NaN? What is its type? How can you reliably test if a value is equal to NaN? Answer: Again with the obvious old questions. NaN stands for Not A Number. Its type, however, is number, because JavaScript is JavaScript. If you want to test if something is NaN, JavaScript has an isNaN method, but it coerces its argument; the reliable checks are Number.isNaN (ES6) or comparing the value to itself, since NaN is the only value not equal to itself. It is not intuitive or very common to use, but it exists.
- In what order will the numbers 1–4 be logged to the console when the code below is executed? why?
(function(){
    console.log(1);
    setTimeout( function(){ console.log(2)}, 1000);
    setTimeout( function(){ console.log(3)}, 0 );
    console.log( 4 );
})();
The answer again is old. A variant of it can be found here with nice explanations.
Basically – the browser interpreting the function will go like this:
I have to write 1
I have to write 2 in a while
I have to write 3 in while. The fact the timeout is set with 0ms doesn’t matter. If you have used JS event base management with setTimeout(0) you will know when to use this tremendously ugly fix (don’t ever use it!).
I have to write 4
The function does not return anything so the browser will output “undefined”
I had to write 3 in 0ms, so the “3” is logged
I had to write 2 in a 1000ms, so the “2” is logged.
Your whole answer is – 1, 4, undefined, 3, 2
- What will the code bellow output? Explain your answer.
console.log( 0.1 + 0.2 );
console.log( 0.1 + 0.2 == 0.3 );
What do you think 0.1 + 0.2 results in, in JavaScript? 0.3? Hell no! JS is famous for its float arithmetic problems. 0.1 + 0.2 results in 0.30000000000000004 (with more or less zeroes here and there).
So, your answers are: 0.30000000000000004 and false
- What will be the output of the following code:
for( var i = 0; i < 5; i++){
    setTimeout( function(){ console.log( i ); }, i * 1000 );
}
Well, welcome to the 2016–7 generic question about JS, setTimeouts and closures. The output is 5 logged five times (roughly one second apart), because var is function-scoped: the loop has already finished by the time the timeouts fire, so each callback sees i === 5. It is the first question in this article. Yes, there are other articles that summarize what people get asked in JS interviews; this one is just a bit more detailed and written from the perspective of someone who has interviewed devs, and the same someone who just went to a dev interview.
I have seen this in so many articles it literally hurts. If anyone has read something about the language at any point he/she can answer without understanding it at all. A lot better question, in my opinion, is this – Explain what a closure is and why would you use it. Give at least 2 examples (one can be a lucky guess)
- What would the following code output in the console?
console.log( "0 || 1 = " + (0 || 1 )); console.log( "1 || 2 = " + (1 || 2 )); console.log( " 0 && 1 = " + ( 0 && 1 ));
Pretty obvious… I am including it because it didn't have the usual "please explain why". If it had, I would probably write – 1, 1, 0, because that is how || and && work.
- What will the following output in the console:
console.log((function f(n){return ((n > 1) ? n * f(n-1) : n)})(10));
Answer: It is a pretty obvious recursive function that calls itself via a simple ternary → 10 * 9 * 8 * 7 * 6 * 5 * 4 * 3 * 2 * 1 = 3628800.
I was honestly getting bored at this point.
- Make a function 'sum' that returns a sum of two values. This function should be callable as in these examples:
console.log( sum( 5, 3 )); // returns 8
console.log( sum(5)(3)); // returns 8
Yupee!!! A challenge has come before us! This is awesome! Especially after 8 generic JavaScript questions that people should be able to answer even if they are half brain dead.
Unfortunately it is both a challenge and it is not.
At first glance it is intimidating and it is weird, and I quite actually like it. Unfortunately I have recently read on Medium several articles about currying written by Joel Thomas. They can be seen as a working use here in “Challenge: Program without variables #javascript” and as an explanations here in “Currying in JavaScript ES6”. So the question itself wasn’t very challenging. Of course there are very few people that I know, that are familiar with currying and I know even fewer (0) that have actually used it. Joel Thomas examples and description are tremendously useful and you should read them.
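For reference, one possible sketch of such a function (not necessarily the answer the test authors had in mind) simply checks how many arguments were passed:

function sum(x, y) {
    // called as sum(5, 3)
    if (arguments.length === 2) {
        return x + y;
    }
    // called as sum(5)(3) - return a function that closes over x
    return function (y) {
        return x + y;
    };
}

console.log(sum(5, 3));  // 8
console.log(sum(5)(3));  // 8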
- What is "callback hell" and how can it be avoided? Answer: there is a whole website, callbackhell.com, dedicated to this term. To put it in layman's terms, it is when developers write JavaScript and create a bunch of nested functions that are triggered by one callback after another. This creates an environment that is pretty difficult to test and debug. If you want to fix it you can do the following things: Keep your code shallow – meaning, keep it small and clean (KISS). Modularize. Handle every single error – there is a concept in Node.js called the error-first callback. You can read more about it here. This will be mentioned later on as well.
Questions about Node.js & Gulp
Those somehow manage to be even more generic. I don’t know how.
- What is Gulp? Answer: (copied from their webpage) – gulp is a toolkit for automating painful or time-consuming tasks in your development workflow, so you can stop messing around and build something. Personal answer: I have mainly seen it used as a build base.
- Is Gulp based on Node.js? Answer: I actually wasn't quite sure. Is there Gulp in Node.js – Yes. Is there Gulp in other environments – Yes. According to their website – Integrations are built into all major IDEs and people are using gulp with PHP, .NET, Node.js, Java, and other platforms. I will, however, be forced to say yes, because the people that made Gulp explain it as – Use npm modules to do anything you want + over 2000 curated plugins for streaming file transformations. I haven't actually seen it used with anything other than JS, either.
- What are modules in Node.js?
- What is the purpose of the packaje.json file? (Yes, they have written it with a typo like that)
- Explain npm in Node.js. I will answer the above three as the same question, since they are. First of all, what are modules in Node? – they are basically the way libraries, "classes" and so on are represented in Node. Some articles on the topic – w3schools. They (modules) can be written as AMD components and CommonJS components. AMD according to Wikipedia. An AMD and CommonJS modules comparison from RequireJS can be found in a few sections here. Secondly, the packaGe.json file – its documentation can be found here. Roughly, the package.json allows you to specify which libraries a project needs and which library versions the project needs, and it makes your build/install easily reproducible. As for what npm is, the abbreviation stands for Node Package Manager. Is it self explanatory? I believe it is. It is the thing that allows you to publish and use external libraries, update them, save them and manage them for a given project. If you use Node you should take a look at the npm list of commands – install, uninstall, update, ls – and their flags – -g, --save, and so on. The npm docs can be found here; however, the difference between a global install or an install with --save can be found and memorized mainly through using it.
- What is error-first callback? Answer: I have mentioned it alongside callback hell earlier. It basically means put the error as the first argument of the callback function, and the success result second.
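For illustration, a minimal sketch of the convention (the readConfig helper here is hypothetical, not something from the test):

const fs = require('fs');

function readConfig(path, callback) {
    fs.readFile(path, 'utf8', function (err, data) {
        if (err) {
            return callback(err);            // the error is always the first argument
        }
        callback(null, JSON.parse(data));    // null error, result second
    });
}

readConfig('./config.json', function (err, config) {
    if (err) {
        console.error('failed to read config:', err);
        return;
    }
    console.log('config loaded:', config);
});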
Questions about TypeScript
- How Do You Implement Inheritance in TypeScript? Answer: using the extends keyword. TypeScript has it, JavaScript does as well.
- How to Call Base Class Constructor from Child Class in TypeScript? Answer: super() or super( args )
- What are Modules in TypeScript? Are you bored? I am. The questions are generic and are seen throughout similar articles and questions on StackOverflow etc. Answer: A somewhat thorough description can be found here. Roughly – they (the modules) are an extended version of the Node ones. You can use external and internal modules, effectively creating namespaces.
- Which Object Oriented Terms are Supported by TypeScript? Answer: Read it like – what OOP keywords and principles does TypeScript have that JavaScript does not? (classes, interfaces, extending, types, public/private/protected variables) Of course all of those things can be emulated in JS by using Object.defineProperty() (of which I am a huge fan), js .extend, jQuery or Zepto extend and fn.extend.
- What is the tsconfig.json file? Answer: Read it as – have you ever used TypeScript? If not, here is your summary – The presence of a tsconfig.json file in a directory indicates that the directory is the root of a TypeScript project. The tsconfig.json file specifies the root files and the compiler options required to compile the project.
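For illustration only, a minimal tsconfig.json could look like this (the options shown are just an example, not what any particular project needs):

{
    "compilerOptions": {
        "target": "es5",
        "module": "commonjs",
        "strict": true,
        "outDir": "./dist"
    },
    "include": ["src/**/*"]
}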
Summary
That was it. I realize that there are several articles out there, Medium included, that discuss JS related questions for job interviews. I just decided to share my own recent experience, which is a bit different, since it is from the perspective of both a guy that had interviewed developers and that has been just interviewed as a developer himself.
All of those questions are generic to the point that the whole test took me roughly 20 minutes (reading and writing, by hand on a piece of paper, included). If you ask me, I would strongly recommend that companies and TLs and tech guys take those kinds of tests, smash them and throw them not in the nearest garbage can, but the one a few blocks away, so there is no chance for someone to see them and connect them to their authors. Putting your company header on the top of each page isn't necessary either…
Don't get me wrong, you can use this test to get someone's understanding tested. But it does so in a very narrow way, and the questions are generic enough – I have seen literally 80% of them in the last week either on Medium, StackOverflow or another forum – that someone can answer them quickly without actually understanding them.
So, instead I would prefer to be both interviewed by, and to interview, an actual human being, instead of doing a test or reading a test. It is a lot easier to talk with someone and ask them some of those questions, followed by clarifying follow-up questions which you can use to test if someone actually knows what he or she is talking about or has just memorized it after reading it a gazillion times. It is also a nice way to find a person that has gotten an answer wrong because of some pitfall, but is a good enough developer to learn it in the future, and the only reason he or she hasn't answered correctly on the first try is because they haven't actually used it. You can also see how the people you interview think when you ask them those questions yourself and discuss the questions with them. Even if their answer is wrong, caused by something, if their thought process is correct I am willing to give them a shot and help them out.
Most of those questions fall into two categories – you have either come across it and it was a painful enough experience to remember it (because it seemed really illogical and strange at the time, or just because you weren't familiar with JS well enough at the time (0.1 + 0.2 = 0.30000000000000004 being a pretty good example of JS weirdness, or the fact that NaN is a number)), or things that are abstract enough that you have not yet encountered them. Both of those categories can be memorized for tests. Memorizing them for tests does not mean an understanding of JavaScript and does not mean the person can apply them correctly in their work. Which is again why I prefer human-to-human interaction or making a project/task and discussing it, instead of writing answers on a piece of paper that is rated a week later.
By the way – the test discussed above didn't discuss any design patterns or anything deeper than understanding basic principles of the language, which in my opinion is a bit strange, considering the fact that it was designed for senior developers.
This is just my opinion of course. If anyone thinks otherwise on anything I have said/written or any of my answers or explanations seem incorrect in any way I am open for discussion in the comments :)
I hope this was useful for anyone or at least made anyone think about those things.
And I really hope that some TL, instead of inviting them, giving them a couple of sheets of paper, and leaving them in a conference room (in which you can open a laptop, computer or phone and just copy answers and not think about it. I actually know people that have passed tests like that and have gotten decent jobs because of it) will start doing interviews with people and actually talk with them to see how well they are equipped to work in the team.
Technical interviews that focus on having a bullshit paper test are useless. I would personally refuse to take part in any and would walk out if they insisted, as they serve no purpose, and a focus on something like that tells me the people I'm interviewing for are not very serious.
Any sensible person generally should answer to the idiotic trivia bs questionnaires with the words "I don't give a fuck, and if I really need to find out I'll either run the code or google".
It's good to know about the common issues of the language you're working in, but to focus on those more than just by asking the question "are you familiar with how bad the typeof operator, floating point operations etc. are in JavaScript?" or "have you heard of the good parts of JavaScript" is ridiculous..
What interviews should focus on is practical experience, trying to find out if you get along, interest and experience in RELEVANT areas around the work they'll be doing.
Basically what a technical interview should consist of is more along the lines of:
... and so on.
When you ask someone e.g. "how well do you know SQL" and the answer is "quite well" you should feel secure that you can hire them without giving them a quiz about it, as you can always fire them if it turns out they lied.
Also the question of knowledge of a specific topic rarely even matters when dealing with suitable people as they can learn the things they don't know. You generally build a team so people with varying levels of experience and different kinds of backgrounds can help each other out, and not so everyone knows the details of all the tools you are using. More diversity is better in this as well.
Oh btw the test:
Is also an immediate red flag that should tell you "NEVER WORK HERE" - no sane person would ever write a function with such obviously differing return values depending a bit on the number of arguments. Holy hell that would make life working with their codebase a pain in the ass, and JavaScript is already a big enough pain in the ass without artificially making it worse.
IMO they just wanted to see if anyone knows what currying is. This being said there are two things
Yes to all of your points!
I would add a few more points about asking them to show me some code samples from their project which they believe to be good because they have found an interesting solution to an interesting problem. And briefly discuss it with them, to get a glimpse of how they think and how they approach problems.
I stayed for the test, because it was the first time somebody handed me a test for a technical interview. I did it way too fast, took pictures of it and decided to write the above rant. If even one TL or someone in charge changes their ways from this to something better, it's worth it.
About the sum function one, what is the best way of doing that in ES5? I did like this, but I don't know if it would be the most appropriate way.
function sum(x) {
    if (arguments[1] == null) {
        return (function(y) {
            return 'With currying: ' + (x + y);
        })
    } else {
        return 'Without currying: ' + (arguments[0] + arguments[1]);
    }
}
Btw, nice article!
Thanks! I like that you found it useful.
Your solution is pretty straightforward and descriptive. I like it.
I think you're right about having to painfully learn those concepts if you've been programming seriously for at most a year. Anything JS-specific on the test can be solved with a few Google searches.
Senior positions should require skills in more abstract domains such as scalability and iteration. Code exams don't really quantify things like that. I would be skeptical going anywhere with a company which spends time asking you about floating point!
P.S. Questions about currying & "callbacks"? It's about continuations/futures and function literals these days IMO;)
I couldn't agree more. I don't think anyone who is invited for a senior developer position and gets asked about a floating point problem will go to work there.
I expected at the very least some design patterns, some better stuff that comes from the "new" standard or literally anything more complicated than this. If I was given a larger code example and asked to identify potential performance issues, or to find other problems, it would have been better in my opinion.
I agree with your P.S as well ;)
|
https://dev.to/k_ivanow/why-most-front-end-dev-interviews-javascript-typescript-are-shit-2hc
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
How Square makes its SDKs
At Square we leverage the OpenAPI standard, Swagger Codegen & GitHub to build and deliver our client SDKs in a scalable way.
The developer platform team at Square is a little different than most. Rather than build separate APIs for our developer products, we focus on exposing the APIs that our first-party product use to create a seamless experience for developers. We have many upstream teams that are stakeholders in our external facing APIs, constantly wanting to expose new features and make improvements. This was an important factor when deciding how we should build our SDKs; we did not want our team to be a bottle-neck, where product teams would have to wait on us to finish updating SDKs before releasing new features. The primary way we avoid that is with SDK generation.
SDK Generation
Instead of writing each of our SDKs by hand (which would not only be time consuming, error prone, and slow down the release of new features into the SDKs) we use a process that relies heavily on SDK generation. There are many flavors of SDK generation out there, so if you are looking into adopting a similar method for your SDKs, be sure to look at a range of the possibilities and find the right one for you. Our preferred flavor uses the OpenAPI specification to define our API endpoints and Swagger Codegen to programmatically generate the code for the SDKs.
API specification
We use the OpenAPI standard to define our APIs. For us, this is a JSON file that defines the url, what kind of HTTP request to make, as well as what kind of information to provide, or expect to get back, for each of our API endpoints. Our specification is made up of 3 main parts: general info/metadata, paths, and models.
General info/metadata
This part of the spec contains some of the descriptive information for the API overall, like where you can find licensing information, or who to contact for help.
"info": { "version": "2.0", "title": "Square Connect API", "description": "Client library for accessing the Square Connect APIs", "termsOfService": "", "contact": { "name": "Square Developer Platform", "email": "developers@squareup.com", "url": "" }, "license": { "name": "Apache 2.0", "url": "" } },
Paths
These describe the individual endpoints (or URL paths) for the API. Each one describes what kind of HTTP request to make, how it should be authorized, what kind of information you should add to the request, and what you should expect to get back. In the example below, you can see that it is a POST request, there are a couple of required parameters in the URL, another one in the body, and you get back a CreateRefundResponse object.
"/v2/locations/{location_id}/transactions/{transaction_id}/refund": { "post": { "tags": [ "Transactions" ], "summary": "CreateRefund", "operationId": "CreateRefund", "description": "Initiates a refund for a previously charged tender.\n\nYou must issue a refund within 120 days of the associated payment. See\n(this article)[] for more information\non refund behavior.", "x-oauthpermissions": [ "PAYMENTS_WRITE" ], "security": [ { "oauth2": [ "PAYMENTS_WRITE" ] } ], "parameters": [ { "name": "location_id", "description": "The ID of the original transaction\u0027s associated location.", "type": "string", "in": "path", "required": true }, { "name": "transaction_id", "description": "The ID of the original transaction that includes the tender to refund.", "type": "string", "in": "path", "required": true }, { "name": "body", "in": "body", "required": true, "description": "An object containing the fields to POST for the request.\n\nSee the corresponding object definition for field details.", "schema": { "$ref": "#/definitions/CreateRefundRequest" } } ], "responses": { "200": { "description": "Success", "schema": { "$ref": "#/definitions/CreateRefundResponse" } } } } },
Models
The models describe the different objects that the API interacts with. They are used primarily for serializing the JSON response from the API into native objects for each language. In this one, CreateRefundResponse, you can see it has a couple of other models that it is comprised of, as well as a description and even an example of what the response looks like.
"CreateRefundResponse": { "type": "object", "properties": { "errors": { "type": "array", "items": { "$ref": "#/definitions/Error" }, "description": "Any errors that occurred during the request." }, "refund": { "$ref": "#/definitions/Refund", "description": "The created refund." } }, "description": "Defines the fields that are included in the response body of\na request to the [CreateRefund](#endpoint-createrefund) endpoint.\n\nOne of `errors` or `refund` is present in a given response (never both).", "example": { "refund": { "id": "b27436d1-7f8e-5610-45c6-417ef71434b4-SW", "location_id": "18YC4JDH91E1H", "transaction_id": "TRANSACTION_ID", "tender_id": "TENDER_ID", "created_at": "2016-02-12T00:28:18Z", "reason": "some reason", "amount_money": { "amount": 100, "currency": "USD" }, "status": "PENDING" } }, "x-sq-sdk-sample-code": { "python": "/sdk_samples/CreateRefund/CreateRefundResponse.python", "csharp": "/sdk_samples/CreateRefund/CreateRefundResponse.csharp", "php": "/sdk_samples/CreateRefund/CreateRefundResponse.php", "ruby": "/sdk_samples/CreateRefund/CreateRefundResponse.ruby" } },
You can see the most recent version of our specification to date version in our Connect-API-Specification repo on GitHub.
The specification is an important part of our generation process, as it is the source of truth about how our APIs work. When other teams want to expand their APIs, release new APIs, or just increase the clarity of a model description, they can make an edit to this single file and have their changes propagate to all of the client SDKs. We actually generate most of our specification from the files that describe the internal service to service communication for even more process automation and easier changes.
Swagger Codegen
Now that we have the specification for our APIs ready to go, how do we turn it into a client facing SDK? The answer is Swagger Codegen. Swagger Codegen is an open source project supported by Smartbear (just like the other Swagger tools) that applies your Open API specification to a series of templates for SDKs in different languages with a little configuration sprinkled in.
Templates
The templates use a language called mustache to define their parts, and for the most part look and read like a file in the desired language. The one below is part of the templates for out PHP SDK. You can see that useful things like code comments are auto generated as well, so that the end SDK can have built in documentation, snippets & more.
<?php
{{#models}}
{{#model}}
/**
 * NOTE: This class is auto generated by the swagger code generator program.
 *
 * Do not edit the class manually.
 */

namespace {{modelPackage}};

use \ArrayAccess;

/**
 * {{classname}} Class Doc Comment
 *
 * @category Class
 * @package  {{invokerPackage}}
 * @author   Square Inc.
 * @license  Apache License v2
 * @link
 */
class {{classname}} implements ArrayAccess
{
    ...
Configuration
These are actually much less complex, and are essentially small json files that describe aspects of your SDK, generally around how it fits into the relevant package manager.
{ "projectName": "square-connect", "projectVersion": "2.8.0", "projectDescription": "JavaScript client library for the Square Connect v2 API", "projectLicenseName": "Apache-2.0", "moduleName": "SquareConnect", "usePromises": true, "licenseName": "Apache 2.0" }
Because the Codegen project is so active, we actually check in a copy of our template files for each of our supported SDKS, and pin to specific Codegen versions to make sure that we don’t accidentally push breaking changes to our users as a result of all the automation. You can see the all of the templates and config files that power the {Java, PHP, C#, Python, Ruby, JavaScript} SDKs in the same repository as our specification file: Connect-API-Specification.
Other Ideas
Our process has evolved quite a bit, with tools like Travis CI making big impacts in the process. You can use CI & CD tools to make the process more automated but be sure that you have a good suite of test coverage to help prevent unexpected changes from creeping into your released code.
Hope your enjoyed the look into our SDK generation process. You can also see a recorded talk I gave at DevRelCon about the subject here. If you want to learn more about our SDKs, or other technical aspects of Square, be sure to follow on this blog, our Twitter account, and sign up for our developer newsletter!
|
https://developer.squareup.com/blog/how-square-makes-its-sdks/
|
CC-MAIN-2019-26
|
en
|
refinedweb
|
Hi there,
In this instructable I will show how I made a really simple Bluetooth Low Energy presence detector. Using my smart wristband and a relay I was able to control the lights of my room.
Every time I go in, the lights turn on, and if I leave the room or the Bluetooth connection is cut, the lights turn off.
Step 1: Parts
I'm using an ESP32 Feather but any other will work
1 5v Relay
1 TIP31C Transsitor
1 BLE Server device (Any beacon device)
The TIP31C is meant to drive the relay, because the 3V3 digital outputs of the ESP32 don't supply enough voltage and current on their own
The relay to control the 120V lights and the wristband to detect the presence of the person.
Step 2: Circuit
This is really simple: pin number 33 of the ESP32 goes to the base of the transistor. With this we can switch the 5V VCC signal, controlling a bigger voltage with the 3V3 output, and then, with the relay, we can control the 120V of the light.
Step 3: Code
#include "BLEDevice.h"
int Lampara = 33;
int Contador = 0;

static BLEAddress *pServerAddress;
BLEScan* pBLEScan;
BLEClient* pClient;
bool deviceFound = false;
bool Encendida = false;
bool BotonOff = false;

String knownAddresses[] = { "your:device:mac:address" };
unsigned long entry;
class MyAdvertisedDeviceCallbacks: public BLEAdvertisedDeviceCallbacks {
  void onResult(BLEAdvertisedDevice Device){
    //Serial.print("BLE Advertised Device found: ");
    //Serial.println(Device.toString().c_str());
    pServerAddress = new BLEAddress(Device.getAddress());
    bool known = false;
    bool Master = false;
    for (int i = 0; i < (sizeof(knownAddresses) / sizeof(knownAddresses[0])); i++) {
      if (strcmp(pServerAddress->toString().c_str(), knownAddresses[i].c_str()) == 0) known = true;
    }
    if (known) {
      Serial.print("Device found: ");
      Serial.println(Device.getRSSI());
      if (Device.getRSSI() > -85) {
        deviceFound = true;
      } else {
        deviceFound = false;
      }
      Device.getScan()->stop();
      delay(100);
    }
  }
};
void setup() {
  Serial.begin(115200);
  pinMode(Lampara, OUTPUT);
  digitalWrite(Lampara, LOW);
  BLEDevice::init("");
  pClient = BLEDevice::createClient();
  pBLEScan = BLEDevice::getScan();
  pBLEScan->setAdvertisedDeviceCallbacks(new MyAdvertisedDeviceCallbacks());
  pBLEScan->setActiveScan(true);
  Serial.println("Done");
}
void Bluetooth() {
  Serial.println();
  Serial.println("BLE Scan restarted.....");
  deviceFound = false;
  BLEScanResults scanResults = pBLEScan->start(5);
  if (deviceFound) {
    Serial.println("Encender Lamara");
    Encendida = true;
    digitalWrite(Lampara, HIGH);
    Contador = 0;
    delay(10000);
  } else {
    digitalWrite(Lampara, LOW);
    delay(1000);
  }
}
void loop() {
  Bluetooth();
}
Step 4: PCB for Light Control
I made this circuit on a prototype PCB to make things cleaner.
Step 5: Done
And you are done!
You can use this code to open doors instead, or to control different things
I hope you like my instructable, and if you have any question make me a comment or send me an inbox, I'll be happy to answer
11 Discussions
4 months ago
I have the same idea project
Adjustable time for missing Bluetooth relay off
Language: German Deutsch
Reply 4 months ago
Nice! Now the interesting part is what you connect to the rellay or what do you activate with when you are near the sensor
Reply 4 months ago
Use Bluetooth relay: if I leave home I want the TV to not turn on. Children should not watch everything on TV without me. If I leave the workplace, my monitor automatically turns off with the help of a relay. The room in which I often go in and out, but where should I not go to strangers if I am not around? The door is automatically locked if I move away from this place.
7 months ago
That is what i search long time. I am not a professional programmer and project like this help me to create my own projects.What will make from this project?
From long time ago i have a idea for good car imobiliser. Now is time to realize it. With little adding a code i will make it to work with 3-4 MACs (fitnes bands or ibeacons), will add deep sleep in code too.
When you come to a car and open it (my is with keyless) CAN-BUS wakeup signal from a car will wakeup ESP32 from a deep sleep and he will check for a deisred MAC Address are inside in a range defined by RSSI. If it a present they will allow starting a car. All this procedure from unlock to allowing start take between 0.8 to 1.5 sec.. For more security and to can work with android phones (because now won't) will take UUID for checking instead of a MAC Address. Who don't work with android units? Because from version 5 or 6 for more security BLE in android start anytime with different random generated MAC Address. UUID are used for identifying different services and are unique.
Sorry for my bad english :(
If anyone can help with application for android with widged button who will send specific UUID via BLE are welcome.
Thanks to Lindermann95.
Regards
Reply 6 months ago
Hi there, I'm sorry, I haven't been around lately, but this car application is one of my plans too
That's a good idea, the random generation MAC like the one that apple use on the WiFi connction. Maybe that could be the answer, Using WiFi...
But I havent work that much with WiFi, BLE would be a suitable option at the moment, I'll work in this project and I'll keep in touch with you, have you already done it?
Reply 6 months ago
Hi,
i made it and he work fine at this time. For Authorization I use UUID, because MAC any time when you start advertising service are different this is in BLE standard. Now I use one application who create GATT server with UUID desired by me. I will write more in few days because this is hobby for me and now I have a many professional work.
Regards
7 months ago
At this time i found you use library ESP32 BLE Arduino by Neil Kolban.
That is right? Are you use arduino ide for this project or another one?
Regards
Reply 7 months ago
Yes! I’m using the Neil Kolban library in the Arduino IDE,
I think that the code didnt pase the way I wanted... but let me change it and add some comments
Let me know if you have more questions
Reply 7 months ago
Thank you for fast anwer.
Ok that fine. From the code i assumed it was written on Adruino but not 100% sure.
If it is on platformio then will have include Arduino.h and all will be clear.
Regards
Question 7 months ago
Hi,
You are add one source code but this code are for ???? platform
Ok i see you add library BLEDevice.h but from where.
I have a interest about this project and will be good to add more light about him :))
Regards
Answer 7 months ago
Hi there! (I’ll answer in the other comment)
|
https://www.instructables.com/id/ESP32-BLE-Presence-Detector/
|
CC-MAIN-2019-26
|
en
|
refinedweb
|
If T is a compound type (that is, array, function, object pointer, function pointer, member object pointer, member function pointer, reference, class, union, or enumeration, including any cv-qualified variants), provides the member constant value equal to true. For any other type, value is false.
Compound types are the types that are constructed from fundamental types. Any C++ type is either fundamental or compound.
#include <iostream>
#include <type_traits>

int main()
{
    class cls {};
    std::cout << (std::is_compound<cls>::value
                      ? "T is compound"
                      : "T is not a compound") << '\n';
    std::cout << (std::is_compound<int>::value
                      ? "T is compound"
                      : "T is not a compound") << '\n';
}
Output:
T is compound T is not a compound
© cppreference.com
Licensed under the Creative Commons Attribution-ShareAlike Unported License v3.0.
|
https://docs.w3cub.com/cpp/types/is_compound
|
CC-MAIN-2021-10
|
en
|
refinedweb
|
Does Colyseus Support Host-Client Online Mode ?
For example, a Colyseus client is a room host, and others can join the room. The dedicated server just relays the state between host and clients.
Can the schema-based state synchronization apply to the above situation?
- dpastorini last edited by dpastorini
hi @stromkuo , that's not the way it works, clients can join or create rooms but not "host" a room.
The way you could achieve something like that would be with a custom implementation where you allow only a specific client to create rooms and the other ones only join those rooms if they were already created.
Also, you could save in the room metadata which user created it and give them more "features" than the other joined clients, but again Colyseus by default will handle the server-client communication and sync; everything else will be on your own.
- endel administrator last edited by endel
If you're going to just relay the messages, I don't think you'd use the state at all. I actually have a room handler that I wanted to provide in the core of Colyseus in case someone needs this, but I haven't tested or experimented with it yet. You can try it out and adapt it to your needs though 😅
import { MapSchema, Schema, type } from '@colyseus/schema';
import { Client } from '.';
import { Room } from './Room';

class Player extends Schema {
  @type('string') public id: string;
  @type('boolean') public connected: boolean;
  public isMaster: boolean;
}

class State {
  @type({ map: Player }) public players = new MapSchema<Player>();
}

/**
 * client.joinOrCreate("punroom", {
 *   maxClients: 10,
 *   allowReconnectionTime: 20
 * });
 */
export class PUNRoom extends Room<State> {
  public allowReconnectionTime: number = 0;

  public onCreate(options) {
    this.setState(new State());
    if (options.maxClients) { this.maxClients = options.maxClients; }
    if (options.allowReconnectionTime) {
      this.allowReconnectionTime = Math.min(options.allowReconnectionTime, 40);
    }
    if (options.metadata) { this.setMetadata(options.metadata); }
  }

  public onJoin(client: Client, options: any) {
    const player = new Player();
    // first player joining is assigned as master
    player.isMaster = (this.clients.length === 1);
    if (this.allowReconnectionTime > 0) { player.connected = true; }
    this.state.players[client.sessionId] = player;
  }

  public onMessage(client: Client, message: any) {
    message.sessionId = client.sessionId;
    this.broadcast(message, { except: client });
  }

  public async onLeave(client: Client, consented: boolean) {
    // master is leaving, let's assign a new master.
    if (this.state.players[client.sessionId].isMaster) {
      const availableSessionIds = Object.keys(this.state.players).filter((sessionId) => sessionId !== client.sessionId);
      const newMasterSessionId = availableSessionIds[Math.floor(Math.random() * availableSessionIds.length)];
      this.state.players[newMasterSessionId].isMaster = true;
    }

    if (this.allowReconnectionTime > 0) {
      this.state.players[client.sessionId].connected = false;
      try {
        if (consented) { throw new Error('consented leave'); }
        await this.allowReconnection(client, this.allowReconnectionTime);
        this.state.players[client.sessionId].connected = true;
      } catch (e) {
        delete this.state.players[client.sessionId];
      }
    }
  }
}
- endel administrator last edited by
I've made some slight changes to this room and made it available here:
It is not exposed in the colyseus module yet, maybe eventually it will be!
|
https://discuss.colyseus.io/topic/279/does-colyseus-support-host-client-online-mode
|
CC-MAIN-2021-10
|
en
|
refinedweb
|
Content calendars are an essential part of any content management toolkit. They display your content in a familiar calendar view, making it easy to see when content is being published.
Scheduling is also a breeze. Want to change when an item is published? Simply drag the item to the new date and you’re done.
A content calendar is a publisher-specific view of content. Traditionally, such a view would be part of the content management app. But with Contentful we don’t have to follow tradition. Rather than creating content management Swiss Army knives, we can create streamlined, uncluttered, fit-for-purpose interfaces.
In this article, we’ll step through how to create a content calendar of our Contentful content. We’ll create a focussed single-page app that uses the RiotJS library and Bulma CSS framework to create the interface. It makes use of the Contentful Content Management API Javascript SDK to both read and update our content.
We’ll be building it for a single content type called article.
Catering for future publishing
Content calendars provide future publishing views of your content. They allow you to say: I want to publish this content on this date.
Out-of-the-box, Contentful doesn't support future publishing. So, we will need to create a new property, publishDate, to hold our future publishing date. When that date becomes due, we can then publish the item, either manually or via a triggered process.
Adding the new publishDate property
Go to the Contentful app
Select Content model in the top menu
Select the content type you wish to display in the calendar (in this case article)
Click on Add field
Select Date and time
Enter Publish Date in the Name field on the pop-up dialog
Click Create
With our publishDate property in place, we can now start building the calendar.
Creating our calendar app
We're going to use RiotJS to build the interface. I like Riot's 'simple and elegant component' approach. Of course, you might prefer React, Angular, Ember, etc. But the interactions between the web app and Contentful will be largely the same.
Our interface consists of two main components:
Pipeline - lists all draft articles that are yet to be assigned a future publishing date
Calendar - standard calendar view containing articles that have been assigned a publishing date
All draft articles can be dragged between the pipeline and the calendar, where:
dragging an article to the pipeline removes the publishDate, and
dragging to a future date only sets the publishDate.
We'll include published articles in the calendar so that we can see what's been published recently. However, published articles will not be draggable.
The pipeline and calendar components are wrapped in an app component just to keep things nice and tidy. It also houses our drag functionality. More on that later.
Status in Contentful
It’s going to be useful to understand how status is worked out in Contentful. If you look at your content in the Contentful app, you’ll see Draft, Published, Updated and Archived.
Each entry in Contentful has a set of system properties and a set of user-defined properties, the fields that you define in the content model. Contentful does not use a dedicated status property but determines an entry’s status by checking a number of system properties:
If the entry has no publishedAt date (or publishedVersion) then it's in Draft
If the entry has a publishedAt date and version is the same as publishedVersion then it's Published
If the entry has a publishedAt date and version is not the same as publishedVersion then it's Updated. Note, the entry is still published, it's just that the latest updates have not been published
If the entry has an archivedDate then it is Archived.
We need to keep this in mind when getting the data for our components.
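As a rough sketch, the rules above could be expressed as a small helper (the property name used for the archived check is an assumption and should be verified against the actual Content Management API response):

function entryStatus(entry) {
    let sys = entry.sys;
    if (sys.archivedAt) return 'Archived';    // assumption: the archived date is exposed as archivedAt
    if (!sys.publishedAt) return 'Draft';
    // published and up to date when the versions line up, otherwise it has unpublished changes
    return (sys.version === sys.publishedVersion) ? 'Published' : 'Updated';
}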
The Pipeline
The pipeline component lists Draft articles that have not yet been assigned a future publishing date. It is a pretty typical Riot component:
A template used to generate the HTML output
Localised CSS style statements
Script for localised logic and functionality
In the template, we're actually calling another component, content-entry, to output the pipeline articles. This allows us to reuse the same layout for both pipeline and calendar items.
items is a JSON array of our pipeline articles. The each attribute is a loop function: Riot will output a content-entry component for each member of the items array.
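The pipeline template itself isn't reproduced here, but based on the description (and the calendar template shown later) it would look roughly like this — a sketch, not the exact markup from the repo:

<pipeline>
    <div class="pipeline-items">
        <content-entry each={items} item={this} id={sys.id}></content-entry>
    </div>
    <style scoped>
        /* pipeline-specific styles */
    </style>
    <script>
        /* load items and respond to the pipeline_loaded event */
    </script>
</pipeline>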
The content entry
It’s worth taking a quick look at the content entry component. It has no script but does add some vital attributes to the item:
<content-entry>
    <article class="card {published: opts.item.sys.publishedAt}" id={opts.id}>
        <header>
            <p class="is-size-6">
                {opts.item.fields.title['en-US']}
            </p>
        </header>
    </article>
    <style scoped>
        ...
    </style>
</content-entry>
It's just outputting a simple Bulma card that only contains the article's title. Importantly, though, it adds a published class if the article has a sys.publishedAt property. We'll be using this later when implementing the drag-and-drop update of the publishDate property.
Getting the data into the pipeline
We’re taking advantage of Riot’s event-driven functionality to get the pipeline data into our app. Here’s how it works.
If you look in app.js you’ll find the following:
bus.on('load_pipeline', function(){
    // make a raw request to get the pipeline entries - no publishDate, no publishedAt
    client.rawRequest({
        method: 'GET',
        url : CONFIG.space_id + '/entries/?&content_type=' + CONFIG.content_type + '&sys.publishedAt[exists]=false&fields.publishDate[exists]=false&order=fields.title'
    })
    .then(function(data){
        bus.trigger('pipeline_loaded', data.items);
    })
    .catch(console.error);
});
This function:
Traps the load_pipeline event raised when our pipeline component is first built.
Uses the Contentful Management JavaScript SDK to grab articles from our Contentful space that have neither a publishedAt property (i.e. are Draft) nor a publishDate property.
Triggers the pipeline_loaded event, passing the articles returned by Contentful.
We use the rawRequest method of the SDK as we can use the returned JSON directly when updating our pipeline output.
The pipeline component traps the pipeline_loaded event and uses the passed articles to update itself:
bus.on('pipeline_loaded', function(items){
    self.items = items;
    self.update();
});
This event-driven approach allows us to keep our components relatively simple and easier to reuse.
The Calendar
The calendar uses the same event-driven approach to build and load the calendar data.
bus.on('calendar_loaded', function(items){
    self.calendar = buildCalendar(items);
    self.update();
});
The buildCalendar function creates a JSON representation of a calendar (months and days). It then appends articles with a defined publishDate property, or that have already been published, to the appropriate day.
The result is the following JSON structure:
{
    weeks: [
        days: [
            {
                iso: …,            /* ISO formatted date */
                local: …,          /* date in local format */
                day: [1-31],       /* day of month */
                month_name: "JAN", /* abbreviated Month name */
                entries: [],       /* array of items where publishDate or publishedAt equals this day */
            },
        ]
    ]
}
Obviously, we have a maximum of 7 items in the days array. The number of weeks depends on the range we are displaying. The default is 4 weeks prior to the current date and 12 weeks after the current date.
The calendar data is then used to update the component, using the following template:
<div id="calendar">
    <table>
        <thead>
            <tr>
                <th width="14.25%">Sunday</th>
                <th width="14.25%">Monday</th>
                <th width="14.25%">Tuesday</th>
                <th width="14.25%">Wednesday</th>
                <th width="14.25%">Thursday</th>
                <th width="14.25%">Friday</th>
                <th width="14.25%">Saturday</th>
            </tr>
        </thead>
    </table>
    <div id="calendar-days">
        <table>
            <tbody>
                <tr each={calendar.weeks}>
                    <td width="14.25%" each={day, index in days} data-date-
                        <span if={day.day==1||index==0}>{day.month_name} </span>{day.day}
                        <content-entry each={day.entries} item={this} id={sys.id}></content-entry>
                    </td>
                </tr>
            </tbody>
        </table>
    </div>
</div>
The output is split into two tables: one for headings and one for the actual days. This allows the days to be scrolled whilst keeping the headings stationary.
The template creates a table row for each week and a table cell for each day. Each table cell will contain any articles that are being published on that day. The data attributes data-date-iso and data-date-local are added to help with updating the publishDate when an article is dragged to a new position.
Enabling drag-and-drop changing of the publish date
The publishDate can be updated by editing the article in the Contentful app. But it would be much easier to just drag it to a new day in the calendar.
To do that, we’ll use the excellent Dragula javascript library. But first let’s consider what rules we need.
Published articles cannot be dragged (they have a class of published)
Articles cannot be dropped on past dates (the target must have a class of current)
Articles can be dropped in the pipeline - this removes the publishDate (the target has a class of pipeline-items)
Now let’s translate these into Dragula:
// set up drag
let drake = dragula({
Our calendar table's cells have a day class. Our pipeline articles are wrapped in a div with the pipeline-items class. So, let's restrict drag-and-drop to articles that are children of these containers:
isContainer: function (el) {
    return el.classList.contains('day') || el.classList.contains('pipeline-items');
},
Now let’s narrow down where articles can be dropped. The target must have a current class (calendar table cells for today onwards) or a pipeline-items class:
accepts: function (el, target, source, sibling) {
    return target.classList.contains('current') || target.classList.contains('pipeline-items');
},
And finally let's prevent articles with a published class from being selected at all:
invalid: function (el, handle) {
    return el.classList.contains('published');
}
Now let's handle the dropping of an article. Let's start by checking if we actually need to do anything:
drake.on('drop', function(el, target, source, sibling){
    if (target === source) return;
When an article is dropped onto a future day, let's update the publishDate. The article's id is on the element being dropped (el). The new publishDate value is stored as a data attribute on the table cell (target):
if (target.classList.contains('current')){ // moving to a day - change publish date let entry_id = el.getAttribute('id') let publish_date = target.getAttribute('data-date-iso') bus.trigger('set_publish_date',{id:entry_id, publish_date:publish_date}); }
And when an article is dropped back into the pipeline then let’s clear its
publishDate:
if (target.classList.contains('pipeline-items')){ // clear publish date let entry_id = el.getAttribute('id'); bus.trigger('set_publish_date',{id:entry_id, publish_date:null}); } });
In both cases, we trigger a
set_publish_date event. This is trapped by our app.js which uses the Javascript SDK to update the article:
bus.on('set_publish_date',function(data){
First we get a reference to our Contentful space and environment:
client.getSpace(CONFIG.space_id) .then(function(space){ return space.getEnvironment(CONFIG.environment_id); })
Now we can get the article using the passed
id:
.then(function(environment){ return environment.getEntry(data.id); })
We've got the article so let's update the publishDate:
.then(function(entry){ let publish_date = {'en-US': data.publish_date}; entry.fields.publishDate = publish_date; return entry.update(); })
Let’s log the update or any error that’s occurred:
.then(function(entry){ console.log('Entry ' + entry.sys.id + ' updated.'); }) .catch(console.error); });
Authenticating with the Content Management API
The calendar needs to include an API token when making requests to the Content Management API. There are a couple of different ways of getting a token, depending on how you want to use your app.
Personal Access Tokens
You can create Personal Access Tokens in the Contentful app. They effectively give all users of the app the same permissions as you.
To create a Personal Access Token:
Open the Contentful app
Select Space Settings in the top menu
Select API Keys in the drop-down list
In the API screen, select the Content management tokens tab
Click on Generate personal token
In the pop-up dialog give your token a name, click on Generate
Your token will be generated and displayed in the response. Copy it into your config file.
Remember this token will be in your client-side code. You should not use it in publicly available apps.
OAuth tokens
The other option is to use OAuth tokens. These grant the same permissions to your app as the Contentful user. When the user opens your application, the following steps take place:
App checks if a token exists
If none is found the user is redirected to the Contentful OAuth endpoint
User logs into Contentful and grants permissions
Contentful redirects back to the app with a token in the URL
App retrieves the token and stores it for use with Content Management API
Your app can persist the local storage of the token to prevent having to go through the process every time the app is opened.
Before you can use OAuth tokens you need to register your application with Contentful.
When you’ve registered your app, update
config.js with the Client ID and the Redirect URI.
If you don’t specify a Personal Access Token in the
config.js file then the calendar app will use OAuth tokens.
Publishing content
A content calendar is about future publishing. To really complete the process we want to automate the publishing of articles based on their
publishDate property.
This can be done by a regularly scheduled cloud function that publishes articles that have an appropriate
publishDate.
Here’s an example in Python that uses the Contentful Management SDK:
Import the relevant libraries
from contentful_management import Client from datetime import datetime, timedelta
Set some constants: master is the default environment id.
TOKEN = 'add your-CMA-token here' SPACE_ID = 'add your-space-id here' ENVIRONMENT_ID = 'master' FREQUENCY = 24 # hours between runs of this function CONTENT_TYPE = 'article' # which content type are we working with
Create the Contentful Management client
client = Client(TOKEN)
Get the entries with a
publishDate that is between now and the last time function ran.
def getEntries(): now = datetime.utcnow() # publish dates are stored in UTC last = now - timedelta(hours=FREQUENCY) comp_from = last.strftime('%Y-%m-%dT%H:%M:%S') comp_to = now.strftime('%Y-%m-%dT%H:%M:%S') print('Getting entries with publish_date between {} and {}'.format(comp_from,comp_to)) return client.entries(SPACE_ID,ENVIRONMENT_ID).all({ 'content_type' : CONTENT_TYPE, 'fields.publishDate[gte]' : comp_from, 'fields.publishDate[lte]' : comp_to })
Loop through the returned
articles and publish if necessary.
def publishEntries(): entries = getEntries() print('Found {} entries...'.format(len(entries))) for entry in entries: if entry.is_published: print('{} already published'.format(entry.title)) else: entry.publish() print('Published {}'.format(entry.title))
This is a very basic approach. You might want to add checking that linked entries and assets are also published and either reporting exceptions or publishing them too.
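For instance, exception handling can be wrapped around the publish call so that one failing entry does not stop the rest of the run. The sketch below is illustrative only; it reuses the getEntries helper above and assumes entry.id exposes the entry's sys id:

def publishEntriesSafely():
    entries = getEntries()
    print('Found {} entries...'.format(len(entries)))
    failures = []
    for entry in entries:
        if entry.is_published:
            continue
        try:
            entry.publish()
            print('Published {}'.format(entry.id))
        except Exception as error:
            # keep going and report the failure at the end
            failures.append((entry.id, str(error)))
    if failures:
        print('Failed to publish: {}'.format(failures))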
The benefit of fit-for-purpose applications
The content calendar is a great example of an app that provides limited functionality to a specific audience.
Traditional content management systems generally achieve this via plugins. This increases the complexity of the interface, introduces potential conflicts with other plugins and can negatively impact upgrade paths.
Contentful, with its excellent API support, allows us to quickly and easily build small, agile standalone apps. They are easier to learn, easier to use and easier to maintain.
The content manager’s toolkit is no longer a swag of competing plugins. But a suite of complementary, fit-for-purpose apps.
|
https://www.contentful.com/blog/2019/01/24/create-content-calendar-contentful/
|
CC-MAIN-2021-10
|
en
|
refinedweb
|
cerebro.plot() does not work with Anaconda / IPython
It appears that the cerebro.plot() command only works in Jupyter, but not in IPython (similarly here). I get the following error:
<IPython.core.display.Javascript object>
<IPython.core.display.HTML object>
Any suggestions on how to overcome the issue? Thanks!
- backtrader administrators last edited by
Not being even a beginner in the usage of
IPython
Anaconda has nothing to do with this. It's a distribution.
Jupyter is the graphical frontend and as such meant for plotting
IPython is focused on interactive Python, i.e.: it is a console.
Try plotting with the ipython automatic detection disabled when calling plot.
Thanks for the explanation. cerebro.plot(iplot=False) alongside changing the graphics backend in Spyder (Tools/Preferences/IPython Console/Graphics) now allows me to plot it via the IPython console.
//edit: When I select "inline", it seems like it worked once, but now the figure window freezes. When I select "Automatic" or anything else, I get the error message "ImportError: Cannot load backend 'TkAgg' which requires the 'tk' interactive framework, as 'qt5' is currently running". Any solution to this?
//edit2: The freeze can be delayed by adding plt.pause(10) after cerebro.plot(iplot=False). In that case, i can at least see and 'use' the figure. I would still be happy about comments on it. Thanks!
having the same problems here, and no kidding, it was killing me.
I spent hours and hours to figure the solution and even pycharm can't help. ( Don't know why )
I tried subclass the "backtrader.plot.Plot" but still not working.
If you really want to use spyder, here is the trick :
import backtrader.plot import matplotlib matplotlib.use('QT5Agg') # Your running code cerebro.plot(iplot= False)
Remember to select your Graphic backend to "Qt5" in spyder.
You should see the windows pop out (It can even be maximizing ! how great...)
I am using win10 , Anaconda , Spyder 3.3.2 , Py 3.7.1, Ipyhon 7.2.0
(User from Hong Kong. Thanks for the platform, it's really great.)
Possible fix:
Thank you - this was very annoying, but your solution worked! :)
|
https://community.backtrader.com/topic/1911/cerebro-plot-does-not-work-with-anaconda-ipython?_=1614109874231
|
CC-MAIN-2021-10
|
en
|
refinedweb
|
This helps isolate your environments for different projects from each other and from your system libraries.
Virtual environments are sufficiently useful that they probably should be used for every project. In particular, virtual environments allow you to:
virtualenv is a tool to build isolated Python environments. This program creates a folder which contains all the necessary executables to use the packages that a Python project would need.
This is only required once. The
virtualenv program may be available through your distribution. On Debian-like distributions, the package is called
python-virtualenv or
python3-virtualenv.
You can alternatively install
virtualenv using pip:
$ pip install virtualenv
This only required once per project. When starting a project for which you want to isolate dependencies, you can setup a new virtual environment for this project:
$ virtualenv foo
This will create a
foo folder containing tooling scripts and a copy of the
python binary itself. The name of the folder is not relevant. Once the virtual environment is created, it is self-contained and does not require further manipulation with the
virtualenv tool. You can now start using the virtual environment.
To activate a virtual environment, some shell magic is required so your Python is the one inside
foo instead of the system one. This is the purpose of the
activate file, that you must source into your current shell:
$ source foo/bin/activate
Windows users should type:
$ foo\Scripts\activate.bat
Once a virtual environment has been activated, the
python and
pip binaries and all scripts installed by third party modules are the ones inside
foo. Particularly, all modules installed with
pip will be deployed to the virtual environment, allowing for a contained development environment. Activating the virtual environment should also add a prefix to your prompt as seen in the following commands.
# Installs 'requests' to foo only, not globally (foo)$ pip install requests
To save the modules that you have installed via
pip, you can list all of those modules (and the corresponding versions) into a text file by using the
freeze command. This allows others to quickly install the Python modules needed for the application by using the install command. The conventional name for such a file is
requirements.txt:
(foo)$ pip freeze > requirements.txt (foo)$ pip install -r requirements.txt
Please note that
freeze lists all the modules, including the transitive dependencies required by the top-level modules you installed manually. As such, you may prefer to craft the
requirements.txt file by hand, by putting only the top-level modules you need.
If you are done working in the virtual environment, you can deactivate it to get back to your normal shell:
(foo)$ deactivate
Sometimes it's not possible to
$ source bin/activate a virtualenv, for example if you are using mod_wsgi in shared host or if you don't have access to a file system, like in Amazon API Gateway, or Google AppEngine. For those cases you can deploy the libraries you installed in your local virtualenv and patch your
sys.path.
Luckily virtualenv ships with a script that updates both your
sys.path and your
sys.prefix
import os mydir = os.path.dirname(os.path.realpath(__file__)) activate_this = mydir + '/bin/activate_this.py' execfile(activate_this, dict(__file__=activate_this))
You should append these lines at the very beginning of the file your server will execute.
This will find the
bin/activate_this.py file that virtualenv created in the same dir you are executing and add your
lib/python2.7/site-packages to
sys.path
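Note that execfile only exists on Python 2. On Python 3, an equivalent approach (a sketch assuming the same directory layout as above) is to read and exec the script yourself:

import os

# locate activate_this.py relative to this file, as in the Python 2 snippet above
mydir = os.path.dirname(os.path.realpath(__file__))
activate_this = os.path.join(mydir, 'bin', 'activate_this.py')

# Python 3 replacement for execfile()
with open(activate_this) as f:
    exec(f.read(), dict(__file__=activate_this))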
If you are looking to use the
activate_this.py script, remember to deploy with, at least, the
bin and
lib/python2.7/site-packages directories and their content.
From Python 3.3 onwards, the venv module will create virtual environments. The
pyvenv command does not need installing separately:
$ pyvenv foo $ source foo/bin/activate
or
$ python3 -m venv foo $ source foo/bin/activate
Once your virtual environment has been activated, any package that you install will now be installed in the
virtualenv and not globally. Hence, new packages can be installed without needing root privileges.
To verify that the packages are being installed into the
virtualenv run the following command to check the path of the executable that is being used :
(Virtualenv Name) $ which python /<Virtualenv Directory>/bin/python (Virtualenv Name) $ which pip /<Virtualenv Directory>/bin/pip
Any package then installed using pip will be installed in the
virtualenv itself in the following directory :
/<Virtualenv Directory>/lib/python2.7/site-packages/
Alternatively, you may create a file listing the needed packages.
requirements.txt:
requests==2.10.0
Executing:
# Install packages from requirements.txt pip install -r requirements.txt
will install version 2.10.0 of the package
requests.
You can also get a list of the packages and their versions currently installed in the active virtual environment:
# Get a list of installed packages pip freeze # Output list of packages and versions into a requirement.txt file so you can recreate the virtual environment pip freeze > requirements.txt
Alternatively, you do not have to activate your virtual environment each time you have to install a package. You can directly use the pip executable in the virtual environment directory to install packages.
$ /<Virtualenv Directory>/bin/pip install requests
More information about using pip can be found on the PIP topic.
Since you're installing without root in a virtual environment, this is not a global install, across the entire system - the installed package will only be available in the current virtual environment.
Assuming
python and
python3 are both installed, it is possible to create a virtual environment for Python 3 even if
python3 is not the default Python:
virtualenv -p python3 foo
or
virtualenv --python=python3 foo
or
python3 -m venv foo
or
pyvenv foo
Actually, you can create a virtual environment based on any working Python version installed on your system. You can check the available Python installations under your
/usr/bin/ or
/usr/local/bin/ (In Linux) OR in
/Library/Frameworks/Python.framework/Versions/X.X/bin/ (OSX), then figure out the name and use that in the
--python or
-p flag while creating virtual environment.
If you are using the default
bash prompt on Linux, you should see the name of the virtual environment at the start of your prompt.
(my-project-env) user@hostname:~$ which python /home/user/my-project-env/bin/python
Fish shell is friendlier, yet you might face trouble while using it with
virtualenv or
virtualenvwrapper. Alternatively,
virtualfish comes to the rescue. Just follow the below sequence to start using Fish shell with virtualenv.
Install virtualfish to the global space
sudo pip install virtualfish
Load the python module virtualfish during the fish shell startup
$ echo "eval (python -m virtualfish)" > ~/.config/fish/config.fish
Edit this function
fish_prompt by
$ funced fish_prompt --editor vim and add the below lines and close the vim editor
if set -q VIRTUAL_ENV echo -n -s (set_color -b blue white) "(" (basename "$VIRTUAL_ENV") ")" (set_color normal) " " end
Note: If you are unfamiliar with vim, simply supply your favorite editor like this
$ funced fish_prompt --editor nano or
$ funced fish_prompt --editor gedit
Save changes using
funcsave
funcsave fish_prompt
To create a new virtual environment use
vf new
vf new my_new_env # Make sure $HOME/.virtualenv exists
If you want create a new python3 environment specify it via
-p flag
vf new -p python3 my_new_env
To switch between virtualenvironments use
vf deactivate &
vf activate another_env
Official Links:
Sometimes the shell prompt doesn't display the name of the virtual environment and you want to be sure if you are in a virtual environment or not.
Run the python interpreter and try:
import sys sys.prefix sys.real_prefix
Outside a virtual environment,
sys.prefix will point to the system python installation and
sys.real_prefix is not defined.
Inside a virtual environment,
sys.prefix will point to the virtual environment python installation and
sys.real_prefix will point to the system python installation.
For virtual environments created using the standard library venv module there is no
sys.real_prefix. Instead, check whether
sys.base_prefix is the same as
sys.prefix.
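A small helper that covers both cases might look like this (a sketch, not part of the original documentation):

import sys

def in_virtual_environment():
    # classic virtualenv sets sys.real_prefix; the stdlib venv module instead
    # makes sys.prefix differ from sys.base_prefix
    return hasattr(sys, 'real_prefix') or getattr(sys, 'base_prefix', sys.prefix) != sys.prefix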
|
https://sodocumentation.net/python/topic/868/virtual-environments
|
CC-MAIN-2021-10
|
en
|
refinedweb
|
Syntax: vector_name.insert (position, val) Syntax: vector_name.insert(position, size, val) Syntax: vector_name.insert(position, iterator1, iterator2) Use std::deque for this, that's what it was designed for.
We will learn about Vectors. Vectors are the same as dynamic arrays, with the ability to resize. We can declare an integer vector with a statement like vector<int> v;. We need to include the vector header to use vectors in our program.
Here, I am declaring an integer vector v. To insert values into a vector, we have to use the push_back function. Here, I am iterating from 1 to 5 and pushing ith value in the vector. We can access the values just like we do in arrays.
Let’s run this code, we can see that all the values in the vector are printed. To print the size of a vector, we can use the size function. We can use the capacity function to returns the size of the storage space currently allocated to the vector expressed as number of elements To check whether a vector is empty or filled, we use the empty function.
It returns 1 is the vector is empty. To print the maximum size of array, we use the max_size function. Let’s run this code, we can see that the size of vector is 5, its capacity is 8 and clearly the vector is not empty, this is the maximum size this vector can store.
To print the first and last element of the array, we use the front and back operators. To print the ith element of the array, we use the at function. Let’s run this code, we can see that the output is as expected. So, this brings us to the end of this video tutorial.
C++ vector append
Best way to append vector to vector
c++ vector::insert at index
vector<>::insert is designed to add elements so it’s the most adequate solution.
You could call reserve on the destination vector to reserve some space, but unless you add a lot of vectors together, it's likely that it won't provide many benefits: vector<>::insert knows how many elements will be added, so you will only avoid one reserve call.
Erase vector c++
To erase all elements with a given value, use the erase-remove idiom with std::remove():
#include <algorithm> ... vec.erase(std::remove(vec.begin(), vec.end(), 8), vec.end());
std::vector<int> vec; vec.push_back(6); vec.push_back(-17); vec.push_back(12); // Deletes the second element (vec[1]) vec.erase(vec.begin() + 1); Or, to delete more than one element at once: // Deletes the second through third elements (vec[1], vec[2]) vec.erase(vec.begin() + 1, vec.begin() + 3);
C++ vector::insert at end
Push_back() function is used to push elements into a vector from the back. The new value is inserted into the vector at the end, after the current last element and the container size is increased by 1. 1. Strong exception guarantee – if an exception is thrown, there are no changes in the container.
Yes, it is well defined. If the vector is empty, begin() equals end(). The effect is that it inserts a copy of the element before the iterator.
§ Table 100 — Sequence container requirements (in addition to container)
|------------------------------------------------------------------| |a.insert(p,t) | iterator Requires:T shall be CopyInsertable into X. For | | | vector and deque, T shall also be CopyAssignable. | | | Effects: Inserts a copy of t before p.
Vector push_back c++
std::vector manages its own memory. That means that, when the destructor of a vector is invoked the memory held by the vector is released.
std::vector also invokes an object’s destructor when it is removed (through
erase,
pop_back,
clear or the vector’s destructor).
When you do this:
Radio newradio(radioNum); m_radios.push_back(newradio);
Method 1
std::vector<int> vec = { 1 }; vec.push_back(2); vec.push_back(3); vec.push_back(4); vec.push_back(5);
Method 2
std::vector<int> vec = { 1 }; int arr[] = { 2,3,4,5 }; vec.insert(std::end(vec), std::begin(arr), std::end(arr));
|
https://epratap.com/cpp-vector-insert/
|
CC-MAIN-2021-10
|
en
|
refinedweb
|
Automated Transparent Genetic Feature Engineering or ATgfe
Project description
ATgfe (Automated Transparent Genetic Feature Engineering)
What is ATgfe?
ATgfe stands for Automated Transparent Genetic Feature Engineering. ATgfe is powered by genetic algorithm to engineer new features. The idea is to compose new interpretable features based on interactions between the existing features. The predictive power of the newly constructed features are measured using a pre-defined evaluation metric, which can be custom designed.
ATgfe applies the following techniques to generate candidate features:
- Simple feature interactions by using the basic operators (+, -, *, /).
(petalwidth * petallength)
- Scientific feature interactions by applying transformation operators (e.g. log, cosine, cube, etc. as well as custom operators which can be easily implemented using user defined functions).
squared(sepalwidth)*(log_10(sepalwidth)/squared(petalwidth))-cube(sepalwidth)
- Weighted feature interactions by adding weights to the simple and/or scientific feature interactions.
(0.09*exp(petallength)+0.7*sepallength/0.12*exp(petalwidth))+0.9*squared(sepalwidth)
- Complex feature interactions by applying groupBy on the categorical features.
(0.56*groupByYear0TakeMeanOfFeelslike*0.51*feelslike)+(0.45*temp)
Why ATgfe?
ATgfe allows you to deal with non-linear problems by generating new interpretable features from existing features. The generated features can then be used with a linear model, which is inherently explainable. The idea is to explore potential predictive information that can be represented using interactions between existing features.
When compared with non-linear models (e.g. gradient boosting machines, random forests, etc.), ATgfe can achieve comparable results and in some cases over-perform them. This is demonstrated in the following examples: BMI, Rational difference and IRIS.
Results
Generated
Classification
Regression
Requirements
- Python ^3.6
- DEAP ^1.3
- Pandas ^0.25.2
- Scipy ^1.3
- Numpy ^1.17
- Sympy ^1.4
Install ATgfe
pip install atgfe
Upgrade ATgfe
pip install -U atgfe
Usage
Examples
The Examples are grouped under the following two sections:
Generated examples test ATgfe against hand-crafted non-linear problems where we know there is information that can be captured using feature interactions.
Toy Examples show how to use ATgfe in solving a mix of regression and classification problems from publicly available benchmark datasets.
Pre-processing for column names
ATgfe requires column names that are free from special characters and spaces (e.g. @, $, %, #, etc.)
# example def prepare_column_names(columns): return [col.replace(' ', '').replace('(cm)', '_cm') for col in columns] columns = prepare_column_names(df.columns.tolist()) df.columns = columns
Configuring the parameters of GeneticFeatureEngineer
GeneticFeatureEngineer( model, x_train: pandas.core.frame.DataFrame, y_train: pandas.core.frame.DataFrame, numerical_features: List[str], number_of_candidate_features: int, number_of_interacting_features: int, evaluation_metric: Callable[..., Any], minimize_metric: bool = True, categorical_features: List[str] = None, enable_grouping: bool = False, sampling_size: int = None, cv: int = 10, fit_wo_original_columns: bool = False, enable_feature_transformation_operations: bool = False, enable_weights: bool = False, enable_bias: bool = False, max_bias: float = 100.0, weights_number_of_decimal_places: int = 2, shuffle_training_data_every_generation: bool = False, cross_validation_in_objective_func: bool = False, objective_func_cv: int = 3, n_jobs: int = 1, verbose: bool = True )
model
ATgfe works with any model or pipeline that follows scikit-learn API (i.e. the model should implement the
fit() and
predict() methods).
x_train
Training features in a pandas Dataframe.
y_train
Training labels in a pandas Dataframe to also handle multiple target problems.
numerical_features
The list of column names that represent the numerical features.
number_of_candidate_features
The maximum number of features to be generated.
number_of_interacting_features
The maximum number of existing features that can be used in constructing new features.
These features are selected from those passed in the
numerical_features argument.
evaluation_metric
Any of the scitkit-learn metrics or a custom-made evaluation metric to be used by the genetic algorithm to evaluate the predictive power of the newly generated features.
import numpy as np from sklearn.metrics import mean_squared_error def rmse(y_true, y_pred): return np.sqrt(mean_squared_error(y_true, y_pred))
minimize_metric
A boolean flag, which should be set to
True if the evaluation metric is to be minimized; otherwise set to
False if the evaluation metric is to be maximized.
categorical_features
The list of column names that represent the categorical features. The parameter
enable_grouping should be set to
True in order for the
categorical_features to be utilized in grouping.
enable_grouping
A boolean flag, which should be set to
True to construct complex feature interactions that use
pandas.groupBy.
sampling_size
The exact size of the sampled training dataset. Use this parameter to run the optimization using the specified number of observations in the training data. If the
sampling_size is greater than the number of observations, then ATgfe will create a sample with replacement.
cv
The number of folds for cross validation. Every generation of the genetic algorithm, ATgfe evaluates the current best solution using k-fold cross validation. The default number of folds is 10.
fit_wo_original_columns
A boolean flag, which should be set to
True to fit the model without the original features specified in
numerical_features. In this case, ATgfe will only use the newly generated features together with any remaining original features in
x_train.
enable_feature_transformation_operations
A boolean flag, which should be set to
True to enable scientific feature interactions on the
numerical_features.
The pre-defined transformation operators are listed as follows:
np_log(), np_log_10(), np_exp(), squared(), cube()
You can easily remove from or add to the existing list of transformation operators. Check out the next section for examples.
enable_weights
A boolean flag, which should be set to
True to enable weighted feature interactions.
weights_number_of_decimal_places
The number of decimal places (i.e. precision) to be applied to the weight values.
enable_bias
A boolean flag, which enables the genetic algorithm to add a bias to the expressions generated. For example:
0.43*log(cement) + 806.8557595548646
max_bias
The value of the bias will be between
-max_bias and
max_bias.
If the
max_bias is 100 then the bias value will be between -100 and 100.
shuffle_training_data_every_generation
A boolean flag, if enabled the
train_test_split method in the objective function uses the generation number as its random seed. This can prevent over-fitting.
This option is only available if
cross_validation_in_objective_func is set to
False.
cross_validation_in_objective_func
A boolean flag, if enabled the
train_test_split method will not be used in the objective function. Instead of using
train_test_split, the genetic algorithm will use cross validation to evaluate the generated features.
The default number of folds is 3. The number of folds can modified using the
objective_func_cv parameter.
objective_func_cv
The number of folds to be used when
cross_validation_in_objective_func is enabled.
verbose
A boolean flag, which should be set to
True to enable the logging functionality.
n_jobs
To enable parallel processing, set
n_jobs to the number of CPUs that you would like to utilise. If
n_jobs is set to -1, all the machine's CPUs will be utilised.
Configuring the parameters of fit()
gfe.fit( number_of_generations: int = 100, mu: int = 10, lambda_: int = 100, crossover_probability: float = 0.5, mutation_probability: float = 0.2, early_stopping_patience: int = 5, random_state: int = 77 )
number_of_generations
The maximum number of generations to be explored by the genetic algorithm.
mu
The number of solutions to select for the next generation.
lambda_
The number of children to produce at each generation.
crossover_probability
The crossover probability.
mutation_probability
The mutation probability.
early_stopping_patience
The maximum number of generations to be explored before early the stopping criteria is satisfied when the validation score is not improving.
Configuring the parameters of transform()
X = gfe.transform(X)
Where X is the pandas dataframe that you would like to append the generated features to.
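Putting the pieces above together, a minimal end-to-end usage sketch might look like this (the model, data frames and column names are placeholder assumptions; only parameters documented above are used, and GeneticFeatureEngineer is assumed to be importable from the atgfe package):

from sklearn.linear_model import LinearRegression

# X_train / y_train are assumed to be pandas DataFrames prepared beforehand,
# and rmse is the evaluation metric defined earlier on this page
gfe = GeneticFeatureEngineer(
    model=LinearRegression(),
    x_train=X_train,
    y_train=y_train,
    numerical_features=['sepallength_cm', 'petalwidth_cm'],
    number_of_candidate_features=4,
    number_of_interacting_features=3,
    evaluation_metric=rmse,
    minimize_metric=True,
    enable_feature_transformation_operations=True,
    n_jobs=-1,
)
gfe.fit(number_of_generations=50)
X_train = gfe.transform(X_train)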
Transformation operations
Get current transformation operations
gfe.get_enabled_transformation_operations()
The enabled transformation operations will be returned.
['None', 'np_log', 'np_log_10', 'np_exp', 'squared', 'cube']
Remove existing transformation operations
gfe.remove_transformation_operation accepts string or a list of strings
gfe.remove_transformation_operation('squared')
gfe.remove_transformation_operation(['np_log_10', 'np_exp'])
Add new transformation operations
np_sqrt = np.sqrt def some_func(x): return (x * 2)/3 gfe.add_transformation_operation('sqrt', np_sqrt) gfe.add_transformation_operation('some_func', some_func)
|
https://pypi.org/project/atgfe/
|
CC-MAIN-2021-10
|
en
|
refinedweb
|
C++ Palindrome Check for a given Number
Hello Everyone!
In this tutorial, we will learn how to check if the given Number is Palindrome or not, in the C++ programming language.
Condition for a Number to be Palindrome:
A number which is equal to its reverse.
Steps to check if the Number is Palindrome:
Compute the reverse of the given number.
If the number is equal to its reverse, it is Palindrome else it is not.
Code:
#include <iostream> #include <math.h> using namespace std; //Returns true if the given number is a Palindrome number bool isPalindrome(int n) { int reverse = 0; //to store the reverse of the given number int remainder = 0; int n1 = n; //storing the original number for comparing later //logic to compute the reverse of a number while (n != 0) { remainder = n % 10; reverse = reverse * 10 + remainder; n /= 10; } if (reverse == n1) return true; else return false; } int main() { cout << "\n\nWelcome to Studytonight :-)\n\n\n"; cout << " ===== Program to determine if the entered number is Palindrome or not ===== \n\n"; //variable declaration int n; bool palindrome = false; //taking input from the command line (user) cout << " Enter a positive integer : "; cin >> n; //Calling a method that returns true if the number is Palindrome palindrome = isPalindrome(n); if (palindrome) { cout << "\n\nThe entered number " << n << " is a Palindrome number."; } else { cout << "\n\nThe entered number " << n << " is not a Palindrome number."; } cout << "\n\n\n"; return 0; }
Output:
Let's try another input,
We hope that this post helped you develop a better understanding of how to check if a given number is a Palindrome or not in C++. For any query, feel free to reach out to us via the comments section down below.
Keep Learning : )
|
https://studytonight.com/cpp-programs/cpp-palindrome-check-for-a-given-number
|
CC-MAIN-2021-04
|
en
|
refinedweb
|
MQTTTranslate
MQTTTranslate uses the ReconnectingMqttClient library (minimum version required v1.1.1) to deliver PJON packets over TCP on local network (LAN) as a MQTT protocol client. It may be useful to connect PJON networks and more standard applications to each other using the MQTT protocol. This strategy works in one of four modes. The first two modes enable to implement a PJON bus via MQTT, the first mode is "closed" and the second is "open" to use by non-PJON programs. The last two modes are for behaving like MQTT devices normally do.
MQTTT_MODE_BUS_RAW mode sends binary JSON packets delivered to a topic like
pjon/device45 (where
45 is a receiver device id). Each device subscribes to a topic with its own name and receives packets like any other PJON strategy. This mode requires that all senders and receivers are linked with PJON for encoding/decoding, so other systems are not easily connected. The directory examples/WINX86/Local/MQTTTranslate/PingPong contains examples for windows, to build it, open the solution file in Visual Studio 2017. The directory examples/ARDUINO/Local/MQTTTranslate/PingPong contains the Arduino examples, to build them, just use the Arduino IDE.
MQTTT_MODE_BUS_JSON mode sends JSON packets with to, from and data, delivered to a topic like
pjon/device45 (where
45 is a receiver device id). Each device subscribes to a topic with its own name and receives packets like
{to:45,from:44,data:"message text sent from device 44 to device 45"}.
MQTTT_MODE_MIRROR_TRANSLATE mode does not use JSON encapsulation of values, and publishes to its own topic, not the receiver's. It publishes to an "output" folder and subscribes to an "input" folder. An outgoing packet with payload
P=44.1,T=22.0 results in the topics
pjon/device44/output/temperature, with a value
22.0 and
pjon/device44/output/pressure, with a value
44.1. Likewise, when receiving an update of
pjon/device44/input/setpoint, with a value
45 results in a packet with payload
S=45. This mode supports a translation table to allow short names to be used in packets while topic names are longer. For example
T translated to
temperature. If no translation table is populated, the same names will be used in the packets and the topics. The directory examples/ESP8266/Local/MQTTTranslate/EnvironmentController contains the ESP8266 example, to build it, just use the Arduino IDE.
MQTTT_MODE_MIRROR_DIRECT mode works like
MQTTT_MODE_MIRROR_TRANSLATE, but just passes the payload on without any translation, leaving the formatting to the user. It does not split packets into separate topics but transfers the packets as-is to one output topic and from one input topic
pjon/device44/output,
pjon/device44/input. The user sketch will have control of the format used, which can be plain text like
P=44.1,T=22.0or a JSON text. The directory examples/ARDUINO/Local/MQTTTranslate/SWBB-MQTT-Gateway contains the Arduino SWBB-MQTT-Gateway example, that showcases bidirectional, transparent data transmission between an MQTT client and a SoftwareBitBang bus. To build it, just use the Arduino IDE.
The "Translate" in the strategy name is because a translation table can be used to translate PJON packet contents to MQTT topics and back. This is to enable PJON packets to remain small (e.g.
t=44.3) between devices with limited memory, while the MQTT packets are made more explicit (e.g.
temperature) to support longer name syntax in external systems.
MAC address usage
The topic names like
pjon/device45/output/temperature in the two MIRROR modes can be replaced with topic names containing the MAC address of the Ethernet/WiFi card of the device, like
pjon/DACA7EEFFE5D/output/temperature. This is selected by setting the
MQTTT_USE_MAC preprocessor definition.
This gives the option to flash the same sketch without modifications to a lot of devices that will all appear in dedicated topics, to enable plug and play.
Note that this functionality does not cover Windows/Linux/OsX in this release.
Configuration
Before including the library it is possible to configure
MQTTTranslate using predefined constants:
Use
PJONMQTTTranslate to instantiate an object ready to communicate using
MQTTTranslate strategy:
#include <PJONMQTTTranslate.h> // Include the PJON library // Use MQTTTranslate strategy with PJON device id 44 PJONMQTTTranslate bus(44); uint8_t broker_ip[] = { 127, 0, 0, 1 }; void setup() { // Sets the broker's ip, port and topic used bus.strategy.set_address(broker_ip, 1883, "receiver"); }
This document is automatically generated from the github repository. If you have noticed an error or an inconsistency, please report it opening an issue here
Updated on 06 November 2020 at 16:15:12
|
https://www.pjon.org/MQTTTranslate.php
|
CC-MAIN-2021-04
|
en
|
refinedweb
|
TrackBy with *ngFor in Angular 9 : Example
This tutorial guides you on how to use TrackBy with *ngFor in Angular 9 application. We also learn how the performance of ngFor can be increased by using TrackBy function in our component class.
TrackBy with *ngFor in Angular 9
Before you get into how to use TrackBy with *ngFor in Angular, you should know what impact it creates in the Angular application.
For example, when you add, move or remove items or elements in the iterated list, the *ngFor directive must re-render the appropriate DOM nodes. This will impact DOM performance if you re-render all the nodes, including the ones which have not changed. Therefore, to improve performance, only the nodes that have changed should be re-rendered.
When you use this TrackBy function, the *ngFor directive uses the result of this TrackBy function to identify the item/element node that is changed instead of identifying the whole object (jsonArray) itself.
This function has two arguments, the iteration index (index of the element) and the associated node/element data (the element itself).
For example, the TrackBy function that you need to create in your component looks like below.
trackJsonArrayElement(index: number, element: any){ return element ? element.uid: null; }
Where “uid” is the field used to identify each item/element node. Note, because the value of “uid” field won’t change when the reference changes, angular would identify them and apply the optimization.
And this function returns unique value of the element i.e., uid. You can also return hash of the element instead of uid.
Example : TrackBy with *ngFor
Now, let’s get in to the implementation part. You need to implement TrackBy function (trackJsonArrayElement()) as shown below in the component class.
mycomponent.component.ts
import { Component } from '@angular/core'; @Component({ selector:'app-mycomp', templateUrl:'./mycomponent.component.html', }) export class MycomponentComponent { json', }, ] trackJsonArrayElement(index: number, element: any){ return element ? element.uid: null; } }
And we need to tell ngFor directive that trackJsonArrayElement() function is used to track the element. Therefore, you need to use this function with ngFor directive as shown below.
<li *ngFor="let element of jsonArray; trackBy: trackJsonArrayElement">{{element.username}}, {{element.uid}}, {{element.age}}</li>
Your component template should look like below.
mycomponent.component.html
<p>This is My First Component</p> <ul> <li *ngFor="let element of jsonArray; trackBy: trackJsonArrayElement"> {{element.username}}, {{element.uid}}, {{element.age}} </li> </ul>
And use the component selector ‘app-mycomp‘ in your parent component (app.component.html).
app.component.html
<div class="container"> <div class="row"> <div class="col-xs-12"> <h3>My First Component !</h3> <hr> <app-mycomp></app-mycomp> </div> </div> </div>
Finally, when you run the above example you should see the following output in the browser.
Output
That’s it, you had learnt how to use track by with *ngFor in Angular 9 application. Hope it helped 🙂
Also See:
- String Interpolation in Angular 9
|
https://www.sneppets.com/angular/trackby-with-ngfor-in-angular-9-example/
|
CC-MAIN-2021-04
|
en
|
refinedweb
|
I am creating an adapter for a third party module by following the Magento_Braintree structure.
In the Braintree module, they have a factory
BraintreeAdapterFactory that creates an adapter
BraintreeAdapter and passes some arguments. I have duplicated the same structure for my module. However, I am getting empty arguments in the
__construct() even if I try hard coding values in my factory.
I added a
di.xml in an attempt to fix it, but still nothing unless I hardcode values for my arguments. I have also made sure to compile, clean cache and delete files from
/generated.
Am I missing something?
Factory:
namespace Vendor\Module\Model\Adapter; use Vendor\Module\Model\Config; use Magento\Framework\ObjectManagerInterface; /** * This factory is preferable to use for Module adapter instance creation. */ class ModuleAdapterFactory { /** * @var ObjectManagerInterface */ private $objectManager; /** * @var Vendor\Module\Model\Config */ private $config; /** * @param ObjectManagerInterface $objectManager * @param Config $config */ public function __construct( ObjectManagerInterface $objectManager, Config $config ){ $this->objectManager = $objectManager; $this->config = $config; } /** * Creates instance of Contentul Adapter. * * @return ContentulAdapter */ public function create() { return $this->objectManager->create( ModuleAdapter::class, [ 'accessToken' => $this->config->getAccessToken(), 'spaceId' => $this->config->getSpaceId() ] ); } }
Adapter:
namespace Vendor\Module\Model\Adapter; use ThirdParty\Delivery\Client; /** * Class ModuleAdapter * Use \Vendor\Module\Model\Adapter\ModuleAdapterFactory to create new instance of adapter. * @codeCoverageIgnore */ class ModuleAdapter { /** * @var Client */ private $client; /** * @param string $accessToken * @param string $spaceId */ public function __construct($accessToken, $spaceId) { $this->client = new Client($accessToken, $spaceId); } }
di.xml:
<?xml version="1.0"?> <config xmlns: <type name="Vendor\Module\Model\Adapter\ModuleAdapter"> <arguments> <argument name="accessToken" xsi:</argument> <argument name="spaceId" xsi:</argument> </arguments> </type> <!
|
https://extraproxies.com/magento-2-empty-arguments-in-adapter-__construct/
|
CC-MAIN-2021-04
|
en
|
refinedweb
|
LCD_AnimInit_TypeDef Struct Reference
LCD Animation Configuration.
#include <em_lcd.h>
LCD Animation Configuration.
Field Documentation
◆ enable
Enable Animation at end of initialization.
◆ AReg
Initial Animation Register A Value.
◆ AShift
Shift operation of Animation Register A.
◆ BReg
Initial Animation Register B Value.
◆ BShift
Shift operation of Animation Register B.
◆ animLogic
A and B Logical Operation to use for mixing and outputting resulting segments.
◆ startSeg
Number of first segment to animate.
|
https://docs.silabs.com/gecko-platform/latest/emlib/api/efm32gg11/struct-l-c-d-anim-init-type-def
|
CC-MAIN-2021-04
|
en
|
refinedweb
|
Startup without peripheral libraries series
In order to configure the ATSAME54P20A, we have to talk about the clock system. The SAME5x/SAMD5x family has one of the more complicated clocking systems I have seen in a microcontroller (the following figure is taken from the SAM-D5XE5X Family Datasheet, page 144):
There are five clock sources:
- XOSC0 – for connecting an external oscillator
- XOSC1 – for connection an external oscillator
- DFLL – 48MHz digital frequency locked loop that can take an external 32kHz input for closed loop control or run without a reference in open loop control
- XOSC32K – for connecting an external 32.768kHz crystal
- OSCULP32K – ultra low power internal 32.768kHz oscillator
Each clock source can connect into any of the 12 Generic Clock Generators, that can divide the source clock, or directly into either of the two digital phase lock loops: DPLL0 and DPLL1. The DPLL’s multiply the clock source into any multitude of frequencies. The DPLL’s can also take a Generic Clock Generator as a reference. Plus, the Generic Clock Generators can take Generator 1 as a source and further divide it. Generator 0 is special and is always used as the CPU clock.
Once a clock source is connected to a generator, the generator is used to provide a clock signal to the peripherals. This complexity makes the clock system unbelievably flexible. All peripherals can run at different clock rates depending on need.
Now that we have discussed the clock system, the steps to getting the MCU running are similar to the STM32F446. Atmel Studio includes the startup code that sets up generic interrupts and jumps to the main loop. All that is required to access the device register headers is including “sam.h”.
The example I give configures the MCU to run with a 12MHz crystal on XOSC1. This is input into DPLL0 and DPLL1 and output at 120MHz and 200MHz respectively (TCC and PDEC can run at 200MHz). DPLL0 is connected to generic clock 0, DPLL1 is connected to generic clock 1 and generic clock 2. Generic Clock 2 divides DPLL1 by two to obtain a 100MHz clock (used by EVSYS, SERCOM, etc).
The startup code is as follows:
- Set up the clocks!
- Enable any peripheral clocks that are needed. I include ADC0 and EVSYS as an example.
- Enable the cache. The wait states are calculated automatically for the clock.
- Configure any peripherals. In this case, I set up SysTick at 1ms. The interrupt callback function is defined in the startup code and is appropriately named SysTick_Handler.
- The sysInit function is called before the main loop
#include "sam.h" void sysInit(void) { // // Enable clocks // // Run with a 12MHz external crystal on XOSC1 OSCCTRL->XOSCCTRL[1].bit.ENALC = 1; OSCCTRL->XOSCCTRL[1].bit.IMULT = 4; OSCCTRL->XOSCCTRL[1].bit.IPTAT = 3; OSCCTRL->XOSCCTRL[1].bit.ONDEMAND = 0; OSCCTRL->XOSCCTRL[1].bit.XTALEN = 1; OSCCTRL->XOSCCTRL[1].bit.ENABLE = 1; // Wait for OSC to be ready while (0 == OSCCTRL->STATUS.bit.XOSCRDY1); // Set up DPLL0 to output 120MHz using XOSC1 output divided by 12 - max input to the PLL is 3.2MHz OSCCTRL->Dpll[0].DPLLRATIO.bit.LDRFRAC = 0; OSCCTRL->Dpll[0].DPLLRATIO.bit.LDR = 119; OSCCTRL->Dpll[0].DPLLCTRLB.bit.DIV = 5; // 2 * (DIV + 1) OSCCTRL->Dpll[0].DPLLCTRLB.bit.REFCLK = 3; // use XOSC1 clock reference OSCCTRL->Dpll[0].DPLLCTRLA.bit.ONDEMAND = 0; OSCCTRL->Dpll[0].DPLLCTRLA.bit.ENABLE = 1; // enable the PLL // Wait for PLL to be locked and ready while(0 == OSCCTRL->Dpll[0].DPLLSTATUS.bit.LOCK || 0 == OSCCTRL->Dpll[0].DPLLSTATUS.bit.CLKRDY); // Set up DPLL1 to output 200MHz using XOSC1 output divided by 12 - max input to the PLL is 3.2MHz OSCCTRL->Dpll[1].DPLLRATIO.bit.LDRFRAC = 0; OSCCTRL->Dpll[1].DPLLRATIO.bit.LDR = 199; OSCCTRL->Dpll[1].DPLLCTRLB.bit.DIV = 5; // 2 * (DIV + 1) OSCCTRL->Dpll[1].DPLLCTRLB.bit.REFCLK = 3; // use XOSC1 clock reference OSCCTRL->Dpll[1].DPLLCTRLA.bit.ONDEMAND = 0; OSCCTRL->Dpll[1].DPLLCTRLA.bit.ENABLE = 1; // enable the PLL // Wait for PLL to be locked and ready while(0 == OSCCTRL->Dpll[1].DPLLSTATUS.bit.LOCK || 0 == OSCCTRL->Dpll[1].DPLLSTATUS.bit.CLKRDY); // Each GCLK has to be enabled and the divider set in a single 32-bit write // Connect DPLL0 to clock generator 0 (120MHz) - frequency used by CPU, AHB, APBA, APBB GCLK->GENCTRL[0].reg = GCLK_GENCTRL_SRC_DPLL0 | GCLK_GENCTRL_DIV(1) | GCLK_GENCTRL_GENEN; while (1 == GCLK->SYNCBUSY.bit.GENCTRL0); // Connect DPLL1 to clock generator 1 (200MHz) - frequency used by TCC, PDEC GCLK->GENCTRL[1].reg = GCLK_GENCTRL_SRC_DPLL1 | GCLK_GENCTRL_DIV(1) | GCLK_GENCTRL_GENEN; while (1 == GCLK->SYNCBUSY.bit.GENCTRL1); // DPLL1 to clock generator 2 and divide by 2 (100MHz) - frequency used by EVSYS, SERCOM, CAN, ADC, DAC GCLK->GENCTRL[2].reg = GCLK_GENCTRL_SRC_DPLL1 | GCLK_GENCTRL_DIV(2) | GCLK_GENCTRL_GENEN; while (1 == GCLK->SYNCBUSY.bit.GENCTRL2); // // Enable peripheral clocks // // EVSYS0 MCLK->APBBMASK.bit.EVSYS_ = 1; GCLK->PCHCTRL[11].reg = GCLK_PCHCTRL_GEN_GCLK2 | GCLK_PCHCTRL_CHEN; while (0 == GCLK->PCHCTRL[11].bit.CHEN); // ADC0 - gen 3 MCLK->APBDMASK.bit.ADC0_ = 1; GCLK->PCHCTRL[40].reg = GCLK_PCHCTRL_GEN_GCLK2 | GCLK_PCHCTRL_CHEN; while (0 == GCLK->PCHCTRL[40].bit.CHEN); // SERCOM1 - gen 3 MCLK->APBAMASK.bit.SERCOM1_ = 1; GCLK->PCHCTRL[8].reg = GCLK_PCHCTRL_GEN_GCLK2 | GCLK_PCHCTRL_CHEN; while (0 == GCLK->PCHCTRL[8].bit.CHEN); // // Set up Cache // CMCC->CTRL.bit.CEN = 1; // enable the cache // // Set up SysTick - 1ms // SysTick_Config(120000); }
|
https://www.brianchavens.com/2018/10/22/startup-without-peripheral-libraries-atsame54p20a/
|
CC-MAIN-2021-04
|
en
|
refinedweb
|
Notary Service support for JWT
Project description
ns_jwt: JSON Web Tokens for Notary Service
We will use RS256 (public/private key) variant of JWT signing. (Source:). For signing, , NS is assumed to be in possession of a public-private keypair. Presidio can access the public key through static configuration or, possibly, by querying an endpoint on NS, that is specified in the token.
NS tokens carry the following claims:
For dates,.
Setup and configuration
No external configuration except for dependencies (PyJWT, cryptography, python-dateutil).
As above, use a virtual environment
virtualenv -p $(which python3) venv source venv/bin/activate pip install --editable ns_jwt pip install pytest
Testing
Simply execute the command below. The test relies on having
public.pem and
private.pem (public and private portions of an RSA key) to be present in the
tests/ directory. You can generate new pairs using
tests/gen-keypair.sh (relies on openssl installation).
pytest -v ns_jwt
Teardown and Cleanup
None needed.
Troubleshooting
CI Logon or other JWTs may not decode outright using PyJWT due to
binascii.Error: Incorrect padding and
jwt.exceptions.DecodeError: Invalid crypto padding. This is due to lack of base64 padding at the end of the token. Read it in as a string, then add the padding prior to decoding:
import jwt with open('token_file.jwt') as f: token_string = f.read() jwt.decode(token_string + "==", verify=False)
Any number of
= can be added (at least 2) to fix the padding. If token is read in as a byte string, convert to
utf-8 first:
jwt_str = str(jwt_bin, 'utf-8'), then add padding.
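Instead of appending a fixed "==", the padding can also be computed so that the token length becomes a multiple of four. A small helper along these lines (not part of ns_jwt) works with the same PyJWT call used above:

import jwt

def decode_unverified(token_string):
    # pad to a multiple of 4 characters before base64 decoding
    padded = token_string + '=' * (-len(token_string) % 4)
    # verify=False matches the older PyJWT API used above; newer PyJWT versions
    # use options={'verify_signature': False} instead
    return jwt.decode(padded, verify=False)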
|
https://pypi.org/project/ns-jwt/
|
CC-MAIN-2021-04
|
en
|
refinedweb
|
carg man page
Prolog
Synopsis
#include <complex.h> double carg(double complex z); float cargf(float complex z); long double cargl(long double complex z);
Description
These functions shall compute the argument (also called phase angle) of z, with a branch cut along the negative real axis.
Return Value
These functions shall return the value of the argument in the interval [-π, +π].
Errors
No errors are defined.
The following sections are informative.
Examples
None.
Application Usage
None.
Rationale
None.
Future Directions
None.
See Also
cimag(), conj(), cproj(), creal(), cimag(3p), complex.h(0p), conj(3p), cproj(3p), creal(3p).
|
https://www.mankier.com/3p/carg
|
CC-MAIN-2019-04
|
en
|
refinedweb
|
A view is a Python function which accepts a web request and returns a web response. This web response can consist of any of the following:
Also Read: Django Application Life Cycle
* HTML content included in a web page.
* redirected web page
* not found error (i.e., 404 error)
* an XML document
* an image
and a few others.
In other words, you create a view that can be linked to a web page. To do so, you are required to link your view to the desired URL.
Django lets you create a view for an application through the views.py file.
Creation of a Simple View
Here, we will be creating the simplest of views in your already created myapp application. The view will only display "Welcome to MyApp".
The coding for this simple view creation is provided below:
from django.http import HttpResponse
def hello(request):
text = """<h1>welcome to my app !</h1>"""
return HttpResponse(text)
In the above code, we have included HttpResponse to provide the HTML content. Since we need to display the response as a web page, we have to map this view to a URL.
However, the above code is not the best way to render HTML in an application's view. The MVT pattern of Django makes it much easier. Using Django's MVT approach, we make use of a template, which in this case is
Myapp/templates/hello.html
After the required changes, the new view will appear like the following:
from django.shortcuts import render
def hello(request):
return render(request, "myapp/template/hello.html", {})
Passing Parameters to A View
A view can also accept parameters. Whenever this view gets linked to a URL, it will display the number which is passed to it as the parameter.
from django.http import HttpResponse
def hello(request, number):
text = "<h1>welcome to my app number %s!</h1>" % number
return HttpResponse(text)
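To actually link this parameterised view to a URL, the app's urls.py needs an entry for it. A sketch (the URL pattern and module paths are assumptions, not part of the original tutorial) could look like this:

from django.urls import path
from . import views

urlpatterns = [
    # maps e.g. /hello/5/ to the hello view, passing number=5
    path('hello/<int:number>/', views.hello, name='hello'),
]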
|
https://www.w3school.in/django-view-creation/
|
CC-MAIN-2019-04
|
en
|
refinedweb
|
International Business
Q. Describe the
advantages and disadvantages of joint venture as a mode of entry into foreign
market.
Ans. A joint venture means
any form of association that is jointly owned by two or more independent firms.
Its advantages are:
i.
Joint venture makes it possible to undertake a big project
requiring huge capital.
ii.
Joint venture permits a firm with limited resources to enter more
foreign markets.
iii.
The foreign partner is benefited from the local partner’s
knowledge of economic, social and political environment of the host country.
iv.
By entering into a joint venture agreement, the competitive
strength of a smaller firm is increased. It also benefits as its risk is shared
by the multinational company.
However,
the disadvantages of joint venture are:
i.
There is possibility of disclosure of trade secrets.
ii.
Dual ownership may lead to conflicts.
iii.
The foreign partner may not bring the latest technology because of
lack of full trust in the local partners.
iv.
It can only succeed when both the partners have something to offer
to the advantage of the other.
Q. What are the benefits
of international business to nations?
Ans. The benefits of
international business to nations are:
a.
Optimum use of resources
b.
Growth of economy
c.
Economies of large scale
d.
Increased employment opportunities
e.
Stabilisation of prices
f.
Increase in standard of living
g.
Enhancement of competition
h.
Global understanding
i.
Opportunity to import the essential goods.
Q. What is an IEC number?
Ans. IEC (Import Export Code)
number is issued by the Directorate General Foreign Trade (DGFT) or Regional
Export Licensing Authority for export/import documents.
Q. What do you mean by
EXIM Policy and who regulates it?
Ans. EXIM Policy means export and
import policy and it is regulated by the Central Government.
Q. What is entrepot
trade?
Ans. When goods are imported with
a view to re-export them, it is known as entrepot trade.
Q. How is Bill of Lading
different from Bill of Entry?
Ans. Bill of lading differs from
Bill of entry in following respects:
a.
Bill of lading is a document related to export transaction while
bill of entry is a document related to import transaction.
b.
Bill of lading is a receipt given
by the shipping company to the exporter for carrying the goods to the importer.
Bill of entry is a form supplied by the customs office to the importer for
assessment of customs duties.
Q. Briefly explain letter
of credit. Why does an exporter need this document?
Ans. A letter of credit may be
defined as a letter issued by the importer’s bank in favour of the exporter
containing an undertaking that the bills drawn by the exporter upon the
importer up to the amount specified therein will be honored by the banker on presentation.
Q. What is a green card
and why is it issued?
Ans. A green card is issued
to an exporter to reduce his transaction costs. It enables the eligible
exporter to avail the following facilities:
a.
Automatic issue of import licenses.
b.
Automatic customs clearance for exports.
c.
Automatic customs clearance for imports related to exports.
d.
Submission of legal undertaking in place of bank guarantee for the
issue of duty free licenses.
Q. What is UNCTAD? Why
was it formed?
Ans. UNCTAD stands for United
Nations Conference on Trade and Development and was formed in 1964. The
widening trade gap between the developed and developing countries, the general
dissatisfaction of the developing countries with the GATT and the need for
international economic cooperation led to the setting up of UNCTAD.
Q. What do you mean by
Export Processing Zone?
Ans. An Export Processing Zone (EPZ) is
an industrial estate usually situated near an international port and/or airport
with a view to encourage units meant for production or processing of export
items. The entire production of units is exported. The procedure is very simple
and speedy. It emphasises on processing and value addition
Q. What is the
significance of Special Import License (SIL)?
Ans. Special Import License
enables an exporter to import specified items to be used in the manufacture of
items meant for export.Certain specified categories of exporters have been
granted this facility.
Q. Define Special
Economic Zones.
Ans. Special Economic Zones
are specifically delineated duty free enclaves and shall be deemed to be
foreign territory for the purposes of trade operations, duties and tariffs.
They are set up to encourage free trade for the purpose of promotion of
exports. It is created by Indian government and goods forwarded to such a zone
are considered as "Deemed Exports". Goods coming from SEZ are treated as import
goods.
Q. What do you mean by
certificate of origin?
Ans. Certificate of origin may be
defined as a document certifying that the goods under export contract have been
produced in the exporting country. The purpose of this certificate is to charge
customs at concessional rates if there is a trade agreement between importing
and exporting countries to charge customs at lower rate on each other’s goods.
It is issued by a Trade Council or some other authorised person.
Q. Name the organisations
that have been set up in the country by the government for promoting country’s
foreign trade.
Ans. Various organisations
have been set up in the country by the government for promoting country’s
foreign trade. They are:
1) Department of Commerce; 2) Export Promotion Councils; 3) Export
Inspection Councils;
4) Indian Trade Promotion Organisation; 5) Indian Institute of
Foreign Trade; 6) Indian Institute of Packaging; 7) State
Trading Organisation.
|
http://dynamic.ucoz.com/index/international_business_2/0-341
|
CC-MAIN-2019-04
|
en
|
refinedweb
|
In my previous article, I gave a tutorial on how we can use Xamarin.iOS (formerly known as MonoTouch) to build iOS mobile applications using C#. In this blog post, I will introduce a third party library that can aid your mobile application development: sqlite-net.
Introducing sqlite-net (ORM)
The sqlite-net library provides simple, easy-to-use object-relational mapping for the SQLite database. The API was designed specifically for mobile applications written on the .NET platform. The library is a single C# source file that is imported into your project with no additional dependencies. Database operations can be either synchronous or asynchronous.
Table and Column Definitions
To define your tables, sqlite-net uses attributes on the domain model's public properties. The minimum required for defining a table is the PrimaryKey attribute. The preferred data type for the primary key is an integer. By default, the table and column names are taken from the class and property names of the domain model.
Let’s look at an example domain:
using SQLite;

namespace Com.Khs.CommandRef.Model
{
    [Table("category")]
    public class Category
    {
        [PrimaryKey]
        public long Id { get; set; }

        public string Description { get; set; }
    }
}
When defining the model, the C# data types that sqlite-net supports are Integers, Booleans, Enums, Singles, Doubles, Strings, and DateTime. Here is a list of database attributes that define your table and columns:
- Table – Define a specific name for the table.
- Column – Define a specific name for the column.
- PrimaryKey – Define the primary key for the table.
- AutoIncrement – Guarantees the primary key as having a unique id value. The domain model property should be an integer.
- Indexed – Defines the column as an index.
- Ignore – Does not add the class property as a column in the table.
Initialize Database
When the iOS application begins to load, I create a database connection and initialize the tables during the FinishedLaunching method of the AppDelegate class. First, create the connection to the database using the SQLiteConnection or SQLiteAsyncConnection constructor. The CreateTable or CreateTableAsync method will create a new table for the connection if it does not already exist in the database. The Connection property will be used by the application for accessing the database.
using SQLite;

namespace Com.Khs.CommandRef
{
    [Register ("AppDelegate")]
    public partial class AppDelegate : UIApplicationDelegate
    {
        public SQLiteConnection Connection { get; private set; }

        public override bool FinishedLaunching (UIApplication application, NSDictionary launchOptions)
        {
            InitializeDatabase();
            return true;
        }

        protected void InitializeDatabase ()
        {
            //Synchronous connection
            Connection = new SQLiteConnection(DbName);

            //Asynchronous connection (alternative):
            //Connection = new SQLiteAsyncConnection(DbName);

            //Create Tables
            Connection.CreateTable<Category>();
            Connection.CreateTable<Reference>();
            Connection.CreateTable<User>();
            Connection.CreateTable<Command>();
        }

        public string DbName
        {
            get { return Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.Personal), "commandref.db"); }
        }
    }
}
For the remainder of the blog, I will constrain my examples using only the synchronous database methods. If you want asynchronous operations, use the corresponding ‘Async’ method names. (As an example: using InsertAsync instead of Insert.)
CRUD Operations
Now that we have the connection created and tables initialized, we can now do some CRUD operations on the database.
Inserting data into your database is as simple as creating a new model object and calling either the Insert or InsertOrReplace method. The InsertOrReplace method will first delete the existing record if it exists, and then insert the new record. If the AutoIncrement is set on a primary key, the model will return with the new ID.
public void AddCategory(SQLiteConnection db)
{
    //Single object insert
    var category = new Category { Description = "Test" };
    var rowsAdded = Db.Insert(category);
    Console.WriteLine("SQLite Insert - Rows Added: " + rowsAdded);

    //Insert list of objects
    List<Category> categories = new List<Category>();
    for (int i = 0; i < 5; i++)
    {
        categories.Add(new Category { Description = "Test " + i });
    }
    rowsAdded = Db.InsertAll(categories);
    Console.WriteLine("SQLite Insert - Rows Added: " + rowsAdded);
}
The operations for update and delete work in similar way as the insert operation:
public void DeleteCategory(SQLiteConnection db, Category category)
{
    //Single object delete
    var rowsDeleted = Db.Delete<Category>(category);
    Console.WriteLine("SQLite Delete - Rows Deleted: " + rowsDeleted);

    //Delete all objects
    rowsDeleted = Db.DeleteAll<Category>();
}

public void UpdateCategory(SQLiteConnection db, Category category, List<Category> categories)
{
    //Single object update
    var rowsUpdated = Db.Update(category);
    Console.WriteLine("SQLite Update - Rows Updated: " + rowsUpdated);

    //Update list of objects
    rowsUpdated = Db.UpdateAll(categories);
    Console.WriteLine("SQLite Update - Rows Updated: " + rowsUpdated);
}
There are two options for querying the database, using predicates or low-level queries. When using the predicates option, the Table method is used. Additional predicates such as Where and OrderBy can be used to tailor the queries.
Let’s look at some examples:
public void QueryCategory(SQLiteConnection db)
{
    //Query the database using predicates.
    //Return all the objects.
    var categories = Db.Table<Category>().OrderBy(c => c.Description);

    //Use Where predicate
    var category = Db.Table<Category>().Where(c => c.Description.Equals("Test"));

    //Use low level queries
    var results = Db.Query<Category>("select * from category where Description = ?", "Test");
}
To simplify the query statements, sqlite-net provides Find and Get methods. They return a single object matching the predicate. In the previous example, the query could have been written in the following way.
category = Db.Find(c => c.Description.Equals("Test"));
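Find returns null when nothing matches, while Get throws an exception if no row is found. As a rough sketch (the primary key value used here is only an assumed example):

//Get looks a row up by its primary key and throws if no matching row exists
var categoryById = Db.Get<Category>(1);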
Additional Features
The sqlite-net also provides a simple transaction framework.
- BeginTransaction – Starts a new database transaction. Throws exception when a transaction is already started.
- SaveTransactionPoint – If a transaction is not started, then a new transaction will be created. Otherwise, set a new rollback point. The database will rollback to the last saved transaction point.
- Commit – Commits the current transaction.
- Rollback – Completely rolls back the current transaction.
- RollbackTo – Rollback to an existing save point set by the SaveTransactionPoint.
public void TransactionOperation()
{
    Db.BeginTransaction( () => {
        // Do some database work.
        // Commits the transaction when done.
    });

    //Another transaction call
    Db.BeginTransaction();

    //Check that the transaction is still active
    if ( Db.IsInTransaction )
    {
        //Close and commit the transaction
        Db.Commit();
    }
}
This article shows some of the capabilities of the sqlite-net library. If you would like to learn more about the sqlite-net, check it out on Github and see the code, examples, and wiki for more information. Good luck!
— Mark Fricke, asktheteam@keyholesoftware.com
|
https://keyholesoftware.com/2013/04/24/introduction-to-sqlite-net/
|
CC-MAIN-2019-04
|
en
|
refinedweb
|
WBZ500 Continuous Mixing Plant For Sale
The continuous mixing plant is a type of plant that equipped with a continuous twin shaft mixer. Com…
1000L Concrete Mixer, Concrete Mixer,JS1000 Concrete Mixer
Introduction: JS1000 is twin shaft horizontal forced type mixer……
Concrete Trailer Pump For Sale
CamelWay concrete trailer pump has maximum output of 70 cubic meter per hour, equipped with self-dia…
20m3 h concrete batching plant concrete
20m3/h concrete batching plant concrete. 25 180 m /h concrete batching plant,concrete batch plant,concrete mixing plant,concrete mixer plant,concrete plant in china.good quality machines for concrete plant.
newest design concrete batching plant
2017 new design mobile 25m3/h concrete batching plant 2017 new design 25m3/h concrete mixing plant. electric, a, china mainland .source from a manufacturer mining 2017 new design yhzs25 mobile concrete 2017 new design
concrete batching plant aimix concrete
hzs series stationary concrete batching plant aimix hzs series stationary concrete mixing plant has a wide range of production capacity, from 25m3/h to 240m3/h.
electronic wet concrete mix plant ready
import china precast concrete plant from various high quality chinese 50m3 ready mix precast concrete batching plant for concrete mixing plant 25m3/h
hzs25 concrete batching plant 25m3 h
news asphalt batching plant|concrete
sap series asphalt batching plant / concrete batching plant own wide adaption range and ideal for middle and large projects. batch type asphalt plants for sale. we are professional asphalt batching plant/concrete mixing plant manufacturer.
25 m3/h china manufacturer mobile
china concrete batching plant, concrete. china concrete batching plant manufacturers new design mobile concrete batching plant for sale 90 m3/h concrete batching mixing plant with overseas service
alibaba concrete batching plant,concrete
a focus machinery co., ltd., experts in manufacturing and exporting concrete batching plant,concrete mixer and 1826 more products. a verified cn
small/mini concrete batching plant
hzs25 mini concrete batching plant plant capacity 25m3/h plant capacity 25 m³/h 35 if you need any types or capacity small concrete mixing plant,
|
http://happydiwaliimageswishes.co.in/concrete-plant/hsz310-concrete-batching-mixing-plant-25m3-h-manufacturer_182664.html
|
CC-MAIN-2019-04
|
en
|
refinedweb
|
JS…
Truck-Mounted Concrete Pump For Sale
CamelWay truck-mounted concrete pump has 62m vertical reach with the height limit programmed to prev…
4m3 Concrete Truck
Work equipment that has a rotating cistern mounted on the frame, suitable for transporting concrete ……
WBZ300 …
used concrete pumps for sale | refurbished trailer &
see the largest inventory of concrete pumps for sale in the united states; concrete equipment? call jed alliance concrete pump, a machine that you
india export data and top export products from india
india export data from indian customs , indian exporters data, united states of india export trade intelligence. india exports are growing everyday and you
construction machinery trade shows, conferences and
constructionshows provides information on construction trade shows, one million earth moving machines will be construction shows is an event and news
content="china foreign trade, china import and export
foreign trade in december 2008 china begins 150 special export tariff on fertilizers countries such as the united states and australia approved
trump officials preparing for harder 'america first' line
may 08, 2018 trump readies tougher america first line for that when it comes to trade, the united states will joined the washington post in
powder mixing machine, powder mixing machine
powder mixing machine, speed import and export trade (jiangyin) co., ltd .. united states (1) vietnam (1) supplier types.
how trade with mexico impacts employment in
how trade with mexico impacts employment in the united states mexico is the united states second largest export market, trade between the united states
top class dry powder blending machine in united states
de germany, se sweden, uk united kingdom, dk denmark. evaluation of a new liquid fire extinguishing agent for united states government a new liquid fire extinguishing agent for the standard and approved extinguishing agent used for class d fires is a dry powder chinese immigrants in the united chinese immigration to the united states has top
side type asphalt blending station design
40 60 tph dry mortar blending plant in china60 tph asphalt blending station machine blending equipment in united states blending plant export to. we provide export services to sell in america logistics, transportation, legal, tax and and accounting advice, open companies in the united states
the 10 biggest exporting countries in the world | therichest
the 10 biggest exporting countries in the world just like any business, a country has to sell its products to earn money. here is a list of the 10 biggest exporting countries in the world. 10. united kingdom $479.2 billion the un . 287 shares. share on facebook tweet this reddit this share this email leave a comment. by sammy
trade information & trade news mclloyd business portal
mclloyd business portal, suppliers directory, import export, trade statistics .. india united states of america non woven fabric
how to start an import/export business entrepreneur
why are imports such big business in the united states and around the world? export trading company voice mail or answering machine;
germany technology wall putty manufacturing plant, dry
dry mix cement mortar machine for technology powder blending machine,masonry mortar mix mortar production with the famous company in united states,
buy class a cement high quality
class a cement trade offers directory and class a cement business offers machine tools accessories prime star import export [agent] united states
made in america machinery
what types of machines does the united states make? agricultural and food machinery includes equipment used to grow, but they are also an important u.s. export. in 2013, the united states exported $142.1 billion of machinery, 12 percent of total exports of manufactured goods. by contrast, machinery makes up only 7
biofuels food and agriculture organization of
expansion is linked to the high mandatory anhydrous ethanol blending ethanol and biodiesel trade in the next ten years biofuels united states european
world production of machine tools 2016 | statistic
retail & trade sports & recreation metal cutting machine tool orders united states 1996 2010 export value of machine tools from china 2016
u.s. allies react with bewilderment to trump s steel
looks likely to be slapped with tariffs on its steel exports to the united states .. 5 billion in trade with the united states, panels and washing machines.
tower type dry mortar blending plant in united states
a china ce certificate automatic enviromental saving dry mix mortar blending plant manufacturer export on alibaba express , find complete details about a china ce certificate automatic enviromental saving dry . united states dry mortar lab mortar mixer manufacturer. united states dry mixing mortar machine manufacturer asphalt mixing
rlb1000 asphalt blending plant in qatar
products from global blending plant cement blending plant hls120 sgs china gmo testing trade blending plant in united .. mixing machine export.
international trading companies building on the
international trading companies passage of the export trading company act of 1982provides new op the united states had a trade deficit of $2.28 billion,.
import statistics from the us census bureau
you are here economic programs overview foreign trade export statistics import statistics purpose . to provide detailed statistics on goods and estimates of services entering the u.s. from foreign countries. the united states code, title 13, requires this program. participation is mandatory. u.s. customs and border
building & construction trade shows in united states
building & construction united states trade shows, find and compare 827 expos, trade fairs and exhibitions to go reviews, ratings, timings, entry ticket fees, schedule, calendar, venue, editions, visitors profile, exhibitor information etc. list of 297 upcoming infrastructure expos in united states 2018 2019 1.
export.gov singapore trade regulations
trade regulations, sg/leftnav/trad/tradenet/list of controlled goods exports.html. the strategic trade ftas with the united states, asean
export & import trade in the bahamas geographia
bahamian exports of non oilmerchandise export & import trade in the bahamas .. foodstuffs account for about $250 million of the imports from the united states.
international trading companies building on the
northwestern journal of international law & business international trading companies building on the japanese model robert w. dziubla passage of the export trading company act of 1982provides new op
video how the economic machine works, according to
largest state exports range from common goods, how the economic machine and as the fifth most important private company in the united states by a
sales manager a fote heavy machinery
africa import export trading; powder grinding mill machine, cement equipment . pearl wu .. united states. more professionals named
cement industry in india trade perspectives
and blending it with soft clay .. world cement trade imports exports trade source itc, geneva united states is the largest trader of cement in the world,
bourbon whiskey wikipedia
u.s. exports of bourbon whiskey surpassed blending with other ingredients (and in various other countries that have trade agreements with the united states to
powder mixing machine, powder mixing machine
powder mixing machine, wholesale various high quality powder mixing machine products from global powder mixing machine suppliers and powder mixing machine factory,importer,exporter at alibaba .
china trade probe of us sorghum a 'normal'
that followed white house decisions to raise tariffs on some chinese made washing machines and solar exports if it wanted united states will go
merchandise trade statistics
explore south africa s merchandise trade statistics through interactive visualisations and other content; exports imports r united states (7.1 ) germany (7
rising tensions in the us china trade relationship
on april 3, 2018, the office of the us trade representative (ustr) published a proposed list of products imported from china to target with an additional 25
trade shows in chicago,trade fairs in chicago,chicago
chicago (united states) trade shows, find and compare 862 expos, trade fairs and exhibitions to go in chicago reviews, ratings, timings, entry ticket fees,
concrete product machines and solutions columbia machine
worldwide leader in design, manufacturing, and support of concrete product machines and solutions including batchers, mixers, and molds.
moment of truth study reveals high percentage of
moment of truth study reveals high percentage of illegal peruvian timber exports experts fear that the lack of transparency in peruvian timber exports will lead to the closure of international markets.
kimberley abattoir gets approval to export beef to the
kimberley abattoir gets approval to export beef to has just received approval to export beef to the united states .. going into the us for blending," he
biofuels in the united states context and outlook
biofuels in the united states context and outlook .. we have two way ethanol trade with brazil 10 ethanol blending tax credit
300m3/h concrete mixing station specifications export
mixing station export the united states united kingdom dry mixed mortar blending get smooth export concrete mix machine from united kingdom
united nations statistics division commodity trade
united nations commodity trade statistics database
company list business directory
company list. search search. search huntkey united states info email web phone dec 17th 2017 oxnard 735 west ventura right now trading limited hong kong info
royal white cement the world's white cement
royal white cement, inc. was established in 1999 in houston, texas. our tremendous growth is directly attributable to the high quality of our white portland cement,
import and export | doing business in ethiopia
import and export to and from this section contains articles on the import/export trade to and from and recently cement are goods that are imported to
90t/h dry powder blending machine dealers
90t/h dry powder blending machine mortar cement. powder silo top mounted united states easy operation dry equipment export dry mix mortar
fast facts about california mexico trade relations
fast facts about california mexico trade relations mexico's largest trade partners in 2014 are the united states (48.8 ); of all the states exports in
blending machine, blending machine suppliers and
blending machine, wholesale various wax making machine, liquid lubricant blending plant, blending machine for lube oil .. united states (10) philippines (6)
japan exports | 1963 2018 | data | chart | calendar
japan's main export partners are the united states (19 percent), china japan exports 1963 2018 | data trading economics members can view,
cement in the usa global cement
cement in the usa. 14 may 2012 the above summary data for the united states of america and its cement industry. cement trading economics website,
u.s. imports and exports components and statistics
u.s. imports and exports components and statistics what does the united states trade with foreign since the united states imports more than it exports,
mixing blending buyers & importers in usa
american mixing blending buyers directory provides list of traders and manufacturers at a united states 513 367 7200 lehigh hanson
marking of country of origin on u.s. imports | u.s
acceptable terminology and methods for markingevery article of foreign origin entering the united states must be except concrete marking of country of origin?
japan s manufacturing competitiveness strategy
japan s manufacturing competitiveness strategy challenges for japan, opportunities for the united states by jane corwin and rebecca puckett
the 10 biggest exporting countries in the world | therichest
the 10 biggest exporting countries in the which accounts for 11.6 percent of its exports, and the united states, other significant export trading partners
production of machine tools worldwide 2016 | statistic
retail & trade sports & recreation technology & telecommunications metal cutting machine tool orders united states 1996 2010 precision tools german exports 2011 2012 laser cutting machine market by product united states 2016 export value of machine tools from china 2016 nacco's worldwide revenue 2011 2015 german textile machinery global export
trump tariffs steel, aluminium, and the new trade
thus far, most u.s. trade adversaries
the pili nut of bicol, philippines in a nutshell, it s
may 10, 2012 de shelling a pili nut is an epic case of man versus machine, bicol region that give the pili nut its united states in trade winds bicol
import genius international trade databases for
import genius provides a web service to help companies involved in import export industry evaluate trading from customs agencies in the united states,
usa brazil industrial supply & trade manufacturers
brazil has considered the us to be its best foreign investor both on exports and imports. the united states trade agreements between the united states machine
durable dry powder blending machine in united states
durable dry powder blending machine in united states. buy powder blending machine high quality powder blending machine powder blending machineapplication and feaures both mixing vane and pail are made of stainless steel.easy to install disassemble and clean the mixing vane.mixing in sealed condition is safe and blending machine, blending machine suppliers and blending machine
global hardness testing machine market research
this report studies hardness testing machine in global revenue, consumption, import and export in these global market, especially in united states
cement blending plant south africa 2016inlacongress.in
most of the provinces as well as export in malvern pennsylvania united states machine,Manufacturer blending machine,industrial cement
democrats should steal trump's thunder on trade |
there s an old saying that even a broken clock is right twice a day. in that spirit, donald trump and his advisors are at least partly right about trade and tariffs.
which is relatively capital abundant united states
which is relatively capital abundant united states canada which is relatively capital abundant united ohlin trade model, the u.s. will export steel
construction machinery trade shows, conferences and
environmental goods and services international
environmental goods and services export opportunities and have become globally available through trade. global exports in environmental goods united states
which is relatively capital abundant united states
which is relatively capital abundant? united states canada capital 40 machines 10 machines labor 200 workers 60 workers answer the capital labor ratios are 1/5 and 1/6 for the united states and canada. since 1/5 is greater than 1/6, the u.s. is capital abundant. by the same reasoning, the labor capital ratio is higher in canada, so it is
20t/h dry powder blending machine from china
20 40t/h dry powder mixing machine export. 20 40t/h dry mortar mixing line in china. case,dry mortar mixing plant 5t/h to 10t/h dry mortar plant hot sale export semi automatic dry mortar 20 40t/h dry powder blending machine . 20t/h dry powder mixing machine in united. 20t/h dry mix mortar processing machine in united mortar manufacturer dry dry mortar machines$ to the united states
understanding u.s. mexico economic ties forbes
sep 26, 2016 given that mexico is the united states second largest export if trade between the united states and a washing machine built in
|
http://happydiwaliimageswishes.co.in/concrete-plant/cement-blending-machine-export-traders-in-united-states_227711.html
|
CC-MAIN-2019-04
|
en
|
refinedweb
|
Download source code for Understanding Code Coverage in Visual Studio Premium 2013
Code coverage is an important asset for project development. It brings stability to the code blocks under measure. The more stable the blocks of code, the more stable the modules and use cases built from them; well-balanced unit test cases can then be written with fewer bugs, and a stable version of the software/modules can be released to production with the least Mean Time To Failure (MTTF), which in turn ensures better project/product quality. In this article, we will look into various aspects of Code Coverage in Visual Studio Premium 2013 with a case study.
N.B.~ The Analyze Code Coverage feature is available in Visual Studio Ultimate and Visual Studio Premium
It is a verification metric that determines how many lines of code in a given binary are exercised when we run test cases against it. By analyzing the code coverage results of the test methods, we can figure out how much code has been tested by the specified test method. It also tells the developer(s) whether a code block is partially, completely, or not at all tested, which in turn helps them add more test cases to solidify the various boundaries under measure. By knowing how many lines of a certain code block are touched, we can judge how well the code/function is tested.
It also provides information about which lines of code have been executed and how long it took to execute them.
Fire up Visual Studio and create a Class Library project. Name it UtilityLibrary, add a class named Utilities.cs, and add the below code to it.
namespace UtilitiesLibraryProject
{
public class Utilities
{
/// <summary>
/// Function: CheckPositiveNegativeNumber
/// Purpose: Check if a number is positive or negative. If positive return 1 else 0
/// </summary>
/// <param name="number"></param>
/// <returns></returns>
public int CheckPositiveNegativeNumber(int number)
{
return number > 0 ? 1 : 0;
}
}
}
Next open a Console Application, add a reference to UtilityLibrary.dll to it and add the below piece of code
using System;
using UtilitiesLibraryProject;
namespace ConsoleApplication1
{
class Program
{
static void Main(string[] args)
{
Utilities objUtilities = new Utilities();
int n = 5;
if (objUtilities.CheckPositiveNegativeNumber(n) == 1)
Console.WriteLine("Positive Number");
else
Console.WriteLine("Negative Number");
Console.ReadKey();
}
}
}
If we run the application, the output will be "Positive Number".
Next, let us add a Unit Test Project
Add a reference to UtilityLibrary.dll to it and write the below piece of code
using Microsoft.VisualStudio.TestTools.UnitTesting;
using UtilitiesLibraryProject;
namespace UnitTestProject1
{
[TestClass]
public class UnitTest1
{
[TestMethod]
public void TestForNegativity()
{
Utilities objUtilities = new Utilities();
int inputValue = 0;
int expected = 0;
int actual;
actual = objUtilities.CheckPositiveNegativeNumber(inputValue);
Assert.AreEqual(expected, actual);
}
}
}
In the test method TestForNegativity, we pass "0" as the input value to the CheckPositiveNegativeNumber function and compare the actual value with the expected value, which we have already set to "0". Now, run the test case either by pressing Ctrl+R, T or by right-clicking on the TestForNegativity test method as under
So our test case passed. Now we can start analyzing the Code Coverage either from Test > Analyze Code Coverage > Selected Tests/All Tests
OR from Test Explorer
After the successful execution of the test(s), Visual Studio calculates the code coverage and displays it in "Code Coverage Results" window
So from the above diagram we can figure out how much code has been covered, and it shows that 1 block has not been covered. Hence the % blocks covered is not 100%. Now, in order to figure out which portions of the code block have been covered fully, partially, or not at all, we can use the Show Code Coverage Coloring option of the Code Coverage Results window as shown under
So let's click on that and we will get the below
Let us understand the various colorings
But with the code written this way, it is very difficult to tell what has been covered and what has not. So let us change our program as below
public int CheckPositiveNegativeNumber(int number)
{
if (number > 1)
return 1;
else
return 0;
}
Now let us re-run the Analyze Code Coverage for Selected Tests from Test Explorer and the Code Coverage Results is as under
Now let's click on the Show Code Coverage Coloring option and the result is as under
So it is clearly visible that we need to write a unit test case for positive numbers. So let's go ahead and add the below test case
[TestMethod]
public void TestForPositive()
{
Utilities objUtilities = new Utilities();
int inputValue = 4;
int expected = 1;
int actual;
actual = objUtilities.CheckPositiveNegativeNumber(inputValue);
Assert.AreEqual(expected, actual);
}
and run both the test methods. The result is as under
Now select both the Tests and perform Analyze Code Coverage for Selected Tests as under
The result is as under
Code Coverage
Hope this will be helpful for writing better unit test cases and will bring more stability to the final product/project once it reaches the production environment. Thanks for reading. Zipped file attached
N.B.~ The program presented here is neither meant to compare the ternary operator with the if..else construct nor to present a good/bad way of writing code (as mentioned earlier, Code Coverage never tells us if the code is well written)
|
http://www.dotnetfunda.com/articles/show/3228/understanding-code-coverage-in-visual-studio-premium-2013
|
CC-MAIN-2019-04
|
en
|
refinedweb
|
Enabling Metrics for the AWS SDK for Java
The AWS SDK for Java can generate metrics for visualization and monitoring with CloudWatch that measure:
- your application's performance when accessing AWS
- the performance of your JVMs when used with AWS
- runtime environment details such as heap memory, number of threads, and opened file descriptors
The AWS SDK Metrics for Enterprise Support is another option for gathering metrics about your application. SDK Metrics is an AWS service that publishes data to Amazon CloudWatch and enables you to share metric data with AWS Support for easier troubleshooting. See Enabling AWS SDK Metrics for Enterprise Support to learn how to enable the SDK Metrics service for your application.
How to Enable AWS SDK for Java Metric Generation
AWS SDK for Java metrics are disabled by default. To enable them for your local development environment, include a system property that points to your AWS security credential file when starting up the JVM. For example:
-Dcom.amazonaws.sdk.enableDefaultMetrics=credentialFile=/path/aws.properties
You need to specify the path to your credential file so that the SDK can upload the gathered datapoints to CloudWatch for later analysis.
If you are accessing AWS from an Amazon EC2 instance using the Amazon EC2 instance metadata service, you don’t need to specify a credential file. In this case, you need only specify:
-Dcom.amazonaws.sdk.enableDefaultMetrics
All metrics captured by the SDK for Java are under the namespace AWSSDK/Java, and are uploaded
to the CloudWatch default region (us-east-1). To change the region, specify it by using the
cloudwatchRegion attribute in the system property. For example, to set the CloudWatch region to
us-west-2, use:
-Dcom.amazonaws.sdk.enableDefaultMetrics=credentialFile=/path/aws.properties,cloudwatchRegion=us-west-2
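If you cannot modify the JVM launch command, the same switch can typically also be set programmatically, as long as it happens before the first AWS client is created (the SDK reads the property when its metrics subsystem is initialized). This is only a sketch; the property value below is an assumed example:

public class MetricsBootstrap {
    public static void main(String[] args) {
        // Must run before any AWS SDK class is used, otherwise the property is ignored.
        System.setProperty(
            "com.amazonaws.sdk.enableDefaultMetrics",
            "credentialFile=/path/aws.properties,cloudwatchRegion=us-west-2");
        // ... create service clients and run the application as usual ...
    }
}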
Once you enable the feature, every time there is a service request to AWS from the AWS SDK for Java, metric data points will be generated, queued for statistical summary, and uploaded asynchronously to CloudWatch about once every minute. Once metrics have been uploaded, you can visualize them using the AWS Management Console.
Available Metric Types
The default set of metrics is divided into three major categories:
- AWS Request Metrics
Covers areas such as the latency of the HTTP request/response, number of requests, exceptions, and retries.
- AWS Service Metrics
Include AWS service-specific data, such as the throughput and byte count for S3 uploads and downloads.
- Machine Metrics
Cover the runtime environment, including heap memory, number of threads, and open file descriptors.
If you want to exclude Machine Metrics, add
excludeMachineMetrics to the system property:
-Dcom.amazonaws.sdk.enableDefaultMetrics=credentialFile=/path/aws.properties,excludeMachineMetrics
More Information
See the amazonaws/metrics package summary for a full list of the predefined core metric types.
Learn about working with CloudWatch using the AWS SDK for Java in CloudWatch Examples Using the AWS SDK for Java.
Learn more about performance tuning in Tuning the AWS SDK for Java to Improve Resiliency
blog post.
|
https://docs.aws.amazon.com/en_us/sdk-for-java/v1/developer-guide/generating-sdk-metrics.html
|
CC-MAIN-2020-45
|
en
|
refinedweb
|
public class SimpleMappingExceptionResolver extends AbstractHandlerExceptionResolver
HandlerExceptionResolver implementation that allows for mapping exception class names to view names, either for a set of given handlers or for all handlers in the DispatcherPortlet.
Error views are analogous to error page JSPs, but can be used with any kind of exception including any checked one, with fine-granular mappings for specific handlers.

String DEFAULT_EXCEPTION_ATTRIBUTE
public SimpleMappingExceptionResolver()
public void setExceptionMappings(Properties mappings)
javax.portet.Port.
Follows the same matching algorithm as RuleBasedTransactionAttribute and RollbackRuleAttribute.
mappings- exception patterns (can also be fully qualified class names) as keys, and error view names as values
RuleBasedTransactionAttribute,
RollbackRuleAttribute
public void setDefaultErrorView(String defaultErrorView)
Default is none.
public void setExceptionAttribute(String exceptionAttribute)
DEFAULT_EXCEPTION_ATTRIBUTE
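As a rough illustration of how these setters are typically combined (the exception pattern and view names below are assumed examples, not part of this class's contract):

// Hypothetical programmatic setup of the resolver.
SimpleMappingExceptionResolver resolver = new SimpleMappingExceptionResolver();

Properties mappings = new Properties();
// Substring pattern of the exception class name -> error view name
mappings.setProperty("PortletException", "portletError");
resolver.setExceptionMappings(mappings);

// Fallback view for exceptions that match no mapping
resolver.setDefaultErrorView("genericError");
// Model attribute under which the exception is exposed to the error view
resolver.setExceptionAttribute("exception");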
protected ModelAndView doResolveException(PortletRequest request, MimeResponse response, Object handler, Exception ex)
doResolveException in class
AbstractHandlerExceptionResolver
request- current portlet request
response- current portlet response
handler- the executed handler, or null if none chosen at the time of the exception (for example, if multipart resolution failed)
ex- the exception that got thrown during handler execution
protected String determineViewName(Exception ex, PortletRequest request)
"exceptionMappings", using the
"defaultErrorView"as fallback.
ex- the exception that got thrown during handler execution
request- current portlet request (useful for obtaining metadata)
nullif none found
protected String findMatchingViewName(Properties exceptionMappings, Exception ex)
exceptionMappings- mappings between exception class names and error view names
ex- the exception that got thrown during handler execution
nullif none found
setExceptionMappings(java.util.Properties)
protected int getDepth(String exceptionMapping, Exception ex)
0 means ex matches exactly. Returns -1 if there's no match. Otherwise, returns depth. Lowest depth wins.
Follows the same algorithm as
RollbackRuleAttribute.
protected ModelAndView getModelAndView(String viewName, Exception ex, PortletRequest request)
getModelAndView(viewName, ex).
viewName- the name of the error view
ex- the exception that got thrown during handler execution
request- current portlet request (useful for obtaining metadata)
getModelAndView(String, Exception)
protected ModelAndView getModelAndView(String viewName, Exception ex)
viewName- the name of the error view
ex- the exception that got thrown during handler execution
setExceptionAttribute(java.lang.String)
|
https://docs.spring.io/spring-framework/docs/4.3.4.RELEASE/javadoc-api/org/springframework/web/portlet/handler/SimpleMappingExceptionResolver.html
|
CC-MAIN-2020-45
|
en
|
refinedweb
|
DNA Toolkit Part 5, 6 & 7: Open Reading Frames and Protein Search. We will use the NCBI database entry that codes for the Homo sapiens insulin protein to see our code in action.
Functions we will add:
- Reading frame generation.
- Protein Search in a reading frame (sub-function for the next function).
- Protein search in all reading frames.
Let’s take a look at how codons (DNA nucleotides triplets) form a reading frame. We just scan a string of DNA, match every nucleotide triplet against a codon table, and in return, we get an amino acid. We keep accumulating amino acids to form an amino acid chain, also called a polypeptide chain. Here is a nice image that shows a reading frame:
Also, here is a very nice explanation (audio) of what a reading frame is and why we need to form six of them for a proper protein search:
So let’s implement our reading frame generator to replicate the biological process that is performed by Ribisome in a living cell. We will reuse translation and reverse complement functions from our previous articles. In biology, Ribosome is an incredibly complex machine, but in the code, it is very simple:
def gen_reading_frames(seq):
    """Generate the six reading frames of a DNA sequence"""
    """including reverse complement"""
    frames = []
    frames.append(translate_seq(seq, 0))
    frames.append(translate_seq(seq, 1))
    frames.append(translate_seq(seq, 2))
    frames.append(translate_seq(reverse_complement(seq), 0))
    frames.append(translate_seq(reverse_complement(seq), 1))
    frames.append(translate_seq(reverse_complement(seq), 2))
    return frames
- We create a list to hold lists of amino acids. This list will hold six lists.
- We add the first three reading frames, 5′ to 3′ end, by shifting one nucleotide in each frame. Our translation function accepts a string as a first argument and a start reading position as a second argument.
- We do the same operation three more times, but we generate the reverse complement of our sequence first.
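The functions translate_seq and reverse_complement are reused from the earlier parts of the series. In case you are starting from this part, here is a minimal sketch of what they could look like; the codon table is the standard genetic code written with the single-letter amino acid codes and the '_' stop symbol used throughout this series, but your own implementations from the previous articles may differ in detail.

# Minimal sketch of the helper functions reused from previous parts
DNA_Codons = {
    'TTT': 'F', 'TTC': 'F', 'TTA': 'L', 'TTG': 'L',
    'CTT': 'L', 'CTC': 'L', 'CTA': 'L', 'CTG': 'L',
    'ATT': 'I', 'ATC': 'I', 'ATA': 'I', 'ATG': 'M',
    'GTT': 'V', 'GTC': 'V', 'GTA': 'V', 'GTG': 'V',
    'TCT': 'S', 'TCC': 'S', 'TCA': 'S', 'TCG': 'S',
    'CCT': 'P', 'CCC': 'P', 'CCA': 'P', 'CCG': 'P',
    'ACT': 'T', 'ACC': 'T', 'ACA': 'T', 'ACG': 'T',
    'GCT': 'A', 'GCC': 'A', 'GCA': 'A', 'GCG': 'A',
    'TAT': 'Y', 'TAC': 'Y', 'TAA': '_', 'TAG': '_',
    'CAT': 'H', 'CAC': 'H', 'CAA': 'Q', 'CAG': 'Q',
    'AAT': 'N', 'AAC': 'N', 'AAA': 'K', 'AAG': 'K',
    'GAT': 'D', 'GAC': 'D', 'GAA': 'E', 'GAG': 'E',
    'TGT': 'C', 'TGC': 'C', 'TGA': '_', 'TGG': 'W',
    'CGT': 'R', 'CGC': 'R', 'CGA': 'R', 'CGG': 'R',
    'AGT': 'S', 'AGC': 'S', 'AGA': 'R', 'AGG': 'R',
    'GGT': 'G', 'GGC': 'G', 'GGA': 'G', 'GGG': 'G',
}

def translate_seq(seq, init_pos=0):
    """Translate a DNA string into a list of amino acids, starting at init_pos"""
    return [DNA_Codons[seq[pos:pos + 3]] for pos in range(init_pos, len(seq) - 2, 3)]

def reverse_complement(seq):
    """Return the reverse complement of a DNA string"""
    return seq.translate(str.maketrans('ATCG', 'TAGC'))[::-1]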
We can now add a 9th output to our main.py file. It is a loop in this case as we need to print 6 lists.
print('[9] + Reading_frames:')
for frames in gen_reading_frames(DNAStr):
    print(frames)
If we now run this function on the string below, here is what we should see:
GGGCGGCTCG

['G', 'R', 'L']
['G', 'G', 'S']
['A', 'A']
['R', 'A', 'A']
['E', 'P', 'P']
['S', 'R']
The next function looks very convoluted but it is very simple in principle. It takes an amino acid list as an argument and scans it to see if it contains a START – M codon and a STOP – _ codon. When an M codon is found (lines 13 – 16) we start accumulating every amino acid after that, until we come across a _ codon (lines 7 – 11). We have two lists here: current_prot[] holds the current protein being accumulated and proteins[] holds all found proteins in a sequence. This is needed because an amino acid sequence may contain multiple START – STOP codons, resulting in multiple possible proteins in a single sequence. Using a debugger to scan this function line by line will help to understand this code better. I also cover debugging this function in my video version of this article here.
def proteins_from_rf(aa_seq):
    """Compute all possible proteins in an aminoacid"""
    """seq and return a list of possible proteins"""
    current_prot = []
    proteins = []
    for aa in aa_seq:
        if aa == "_":
            # STOP accumulating amino acids if _ - STOP was found
            if current_prot:
                for p in current_prot:
                    proteins.append(p)
                current_prot = []
        else:
            # START accumulating amino acids if M - START was found
            if aa == "M":
                current_prot.append("")
            for i in range(len(current_prot)):
                current_prot[i] += aa
    return proteins
In this case, we are not adding an output as this function is a sub-function for our next function and it only builds a protein from a single reading frame. Our next function will use this code and pass all six reading frames to it.
We can still do a quick test:
print(proteins_from_rf(['I', 'M', 'T', 'H', 'T', 'Q', 'G', 'N', 'V', 'A', 'Y', 'I', '_']))
And this protein sequence will be generated:
['MTHTQGNVAYI']
Let’s now add our final function, that is a part of a pipeline. This function uses a few previous functions to generate a list of proteins for us, and it accepts 4 arguments; a sequence, a start reading and stop reading positions, and a boolean flag that lets us sort the list from a longest to a shortest protein sequence.
def all_proteins_from_orfs(seq, startReadPos=0, endReadPos=0, ordered=False):
    """Compute all possible proteins for all open reading frames"""
    """Protein Search DB:"""
    """API can be used to pull protein info"""
    if endReadPos > startReadPos:
        rfs = gen_reading_frames(seq[startReadPos: endReadPos])
    else:
        rfs = gen_reading_frames(seq)

    res = []
    for rf in rfs:
        prots = proteins_from_rf(rf)
        for p in prots:
            res.append(p)

    if ordered:
        return sorted(res, key=len, reverse=True)
    return res
We start by checking if the reading position was provided (lines 5 – 8). If yes, we generate reading frames for a slice of the string, if not, we generate reading frames for the whole sequence. This allows us to pass a sequence and just specify (if needed) a slice of it to look for proteins in, instead of pre-formatting (slicing) a string before providing it to our function.
In lines 10 – 14 we scan all six reading frames, using our previous function to generate all possible proteins in each reading frame. We end up with a list res[] that contains all protein sequences found in all six reading frames.
I mentioned a pipeline above as this function uses previous functions to produce the result. We start with a DNA sequence and we have this pipeline by using this function:
DNA -> Translation -> Reverse Complement -> Reading Frame Generation -> Protein Assembly
Now let’s add our final, 10th output loop:
print('\n[10] + All prots in 6 open reading frames:')
for prot in all_proteins_from_orfs(DNAStr, 0, 0, True):
    print(f'{prot}')
The best way to test out the final function is to apply it to a real biological sequence: Homo sapiens insulin, transcript variant 1. It can be found in the NCBI database. If we look at the FASTA formatted file, we see this DNA sequence:
Let’s add it to our sequences.py file:
# NM_000207.3 Homo sapiens insulin (INS), transcript variant 1, mRNA
NM_000207_3 = '\
AGCCCTCCAGGACAGGCTGCATCAGAAGAGGCCATCAAGCAGATCACTGTCCTTCTGCCAT\
GGCCCTGTGGATGCGCCTCCTGCCCCTGCTGGCGCTGCTGGCCCTCTGGGGACCTGACCCA\
GCCGCAGCCTTTGTGAACCAACACCTGTGCGGCTCACACCTGGTGGAAGCTCTCTACCTAG\
TGTGCGGGGAACGAGGCTTCTTCTACACACCCAAGACCCGCCGGGAGGCAGAGGACCTGCA\
GGTGGGGCAGGTGGAGCTGGGCGGGGGCCCTGGTGCAGGCAGCCTGCAGCCCTTGGCCCTG\
GAGGGGTCCCTGCAGAAGCGTGGCATTGTGGAACAATGCTGTACCAGCATCTGCTCCCTCT\
ACCAGCTGGAGAACTACTGCAACTAGACGCAGCCCGCAGGCAGCCCCACACCCGCCGCCTC\
CTGCACCGAGAGAGATGGAATAAAGCCCTTGAACCAGC'
The main page shows a protein sequence we should get from that DNA sequence:
/translation="
So let’s run our code and see if we can generate this protein:
print('\n[10] + All prots in 6 open reading frames:')
for prot in all_proteins_from_orfs(NM_000207_3, 0, 0, True):
    print(f'{prot}')
And here is what we see:
[10] + All prots in 6 open reading frames:
LVQHCSTMPRFCRDPSRAKGCRLPAPGPPPSSTCPTCRSSASRRVLGV
MPRFCRDPSRAKGCRLPAPGPPPSSTCPTCRSSASRRVLGV
MAEGQ
ME
We can see that the first protein in our output indeed matches NCBI proposed protein, confirming the correctness of our code in every step of the pipeline.
Alright! This is it. Now we have a set of basic tools to work with DNA. Next, we will refactor this code into a nice reusable class and optimize some of the code. This might be overkill for an article, so if you are interested in wrapping this code into a class, feel free to watch parts 8 to 9, as they come out, here:
GitLab repository:
Until next time, rebelCoder, signing out.
|
https://rebelscience.club/2020/04/dna-toolkit-part-5-6-7-open-reading-frames-protein-search/
|
CC-MAIN-2020-45
|
en
|
refinedweb
|
Update 2/26/2020 – due to the comments I’ve posted a github project that you can check out to help.
What.
0365
Seems like there is flickering before hiding.
Tom Daly
you have some trade offs here due to when app customizers load and caching. css vs js . Just remember all Microsoft code loads first, then it allows app customizers / webparts.
the least amount of flicker is to hide the bar with CSS and then show it with JavaScript. the css will cache but there is always a flicker.
Jake
That would be great, thanks very much for the response Tom.
Tom Daly
Just posted an update to the article w/ a link to a sample github project. let me know if you have any other questions!
Jake
Thanks so much Tom, this works great. Your help is much appreciated. 🙂
William
Hello, I am new to Sharepoint and trying to apply your tutorial. I dit all the MS Sharepoint tutorials you advised (create an extension with top and bottom placeholders) but i am stuck with following errors when running ‘gulp serve’ :
– ERROR TS2304: Cannot find name ‘IHeaderProps’
– ERROR TS2552: Cannot find name ‘Header’. Did you mean ‘Headers’ ?
I assume these need to be declared but how and where?
For the React and the ReactDom I have added:
import * as React from “react”;
import * as ReactDom from “react-dom”;
Do I need to understand that a file Header.tsx needs to be created ? If so, where to place this ?
Thank you for your help.
William
Hello,
We resolved this issue, no further action needed.
Thank you and regards,
W.
Jake
Hi William,
I’m not sure whether you will see this comment but I’m in similar position to yourself, could you perhaps expand on what you had to do to resolve these errors? Any help would be greatly appreciated.
Thanks
Tom Daly
Thanks for the feedback. The header is if you had a component you were creating in the application customizer. This was a snippet from my original POC. Tomorrow I’ll post a link to a sample github project with the complete solution.
Jake
Hi William,
I don’t know whether you will see this but I am in the same situation as yourself. I was wondering if you could perhaps elaborate on how you were able to resolve these errors?
Thanks
|
http://thomasdaly.net/2018/05/14/trimming-the-suite-bar-ribbon-on-modern-sharepoint-sites-in-office-365/
|
CC-MAIN-2020-45
|
en
|
refinedweb
|
DNP3 object point. More...
#include <app-layer-dnp3.h>
DNP3 object point.
Each DNP3 object can have 0 or more points representing the values of the object.
Definition at line 174 of file app-layer-dnp3.h.
Data for this point.
Definition at line 182 of file app-layer-dnp3.h.
Referenced by OutputJsonDNP3SetItem().
Index of point. If the object is prefixed with an index then this will be that value. Otherwise this is the place the point was in the list of points (starting at 0).
Definition at line 176 of file app-layer-dnp3.h.
Prefix value for point.
Definition at line 175 of file app-layer-dnp3.h.
Size of point if the object prefix was a size.
Definition at line 180 of file app-layer-dnp3.h.
|
https://doxygen.openinfosecfoundation.org/structDNP3Point__.html
|
CC-MAIN-2020-45
|
en
|
refinedweb
|
#include <unistd.h> int fdatasync(int fildes);
The functionality shall be equivalent to fsync() with the symbol _POSIX_SYNCHRONIZED_IO defined, with the exception that all I/O operations shall be completed as defined for synchronized I/O data integrity completion.
In the event that any of the queued I/O operations fail, fdatasync() shall return the error conditions defined for read() and write().
The following sections are informative.
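The following example is informative, not part of the normative description. It assumes a POSIX system with a writable current directory: it appends a record to a file and then forces the file data to stable storage with fdatasync().

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("data.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd == -1) {
        perror("open");
        return 1;
    }
    const char msg[] = "record\n";
    if (write(fd, msg, sizeof(msg) - 1) == -1)
        perror("write");
    /* Complete the write with synchronized I/O data integrity completion */
    if (fdatasync(fd) == -1)
        perror("fdatasync");
    close(fd);
    return 0;
}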
The Base Definitions volume of POSIX.1-2008, <unistd.h>
Any typographical or formatting errors that appear in this page are most likely to have been introduced during the conversion of the source files to man page format. To report such errors, see .
|
https://man.linuxreviews.org/man3p/fdatasync.3p.html
|
CC-MAIN-2020-45
|
en
|
refinedweb
|
4 years, 8 months ago.
Teensy 3.1 Teensy 3.2 USBSerial not Showing Anything + Its Fix
Dear all, I encountered problems with USBSerial using Teensy 3.1. I have been working with Teensy 3.1 with Arduino Teensyduino plugin for a long time. There was no problem with Arduino plugin. When mbed updated Teensy 3.1 as a supported platform + export to offline toolchains, I tried following code, but no printf appear on serial terminal. LED blinked properly
#include "mbed.h"
#include "USBSerial.h"

DigitalOut OnBoardLed (D13);
USBSerial hPC;

int main () {
    while (1) {
        OnBoardLed = 1;
        wait (0.5);
        hPC.printf ("ON\r\n");
        OnBoardLed = 0;
        wait (0.5);
        hPC.printf ("OFF\r\n");
    }
}
The problem is that I am not seeing the printf messages in the terminal. I use the YAT terminal. I can connect to the terminal via YAT, but nothing appears. The LED is blinking OK. Why?
These are the steps to reproduce the problem:
1. Windows 8.1 PC, download and install driver from this link
2. The driver is not signed, but installed it anyway
3. Teensy appears in Windows as Mbed Virtual Serial Port COM20, baud rate = 9600?
4. Export mbed project to EmBlock
5. Build the project using the EmBlock-installed ARM GCC bare-metal compiler
6. Use TeensyLoader to load the HEX file to Teensy 3.1
7. Try terminal program YAT. It cannot open the port COM20
8. Try PuTTY. It cannot open the port COM20
9. Also tried mbed online compiler. LED worked, printf never worked
Fixes:
1. Install Arduino + Teensyduino plugin + Teensy 3.1 driver as usual
2. There is no need to install the driver from this link
3. The driver is NOT signed, cannot install unless you turn off the driver signature requirement
4. You must use a Windows XP PC OR Windows 7 PC, no Windows 8 PC, no Windows 8.1 PC
5. Serial terminal shows printf without problem
1 Answer
4 years, 8 months ago.
I have a Teensy 3.1; a similar test program produces a print output when I choose the correct port ( I use a Gtkterm linux terminal)
Did you check and install the driver ?
This was tested :
1)with a offline compiler GCC-ARM and a library around november 2015
2) now with the online compiler
I am pretty sure the problem is around Windows .
If you are familiar with the Arduino way , add Teensyduino and you can do the same test within the Arduino/ TeensyDuino. Look at and
Good luck
Yes, I have been using Teensy 3.1 via Arduino way. But I would like to switch to mbed way. But no luck in getting USBSerial to work. My PC is Windows8.1 Was your PC Windows7?posted by 11 Feb 2016
My PC is Linux (Ubuntu 14.04)...no driver needed
If you run a similar test on mbed and on Teensyduino, do you receive the printf output with the Arduino software?
Yes, everything from Arduino way is working. I need to find a Windows7 PC to try everything. Will update the postposted by 11 Feb 2016
You are connecting to the mbed com port? And did you update mbed library and USBDevice library to the latest version?posted by Erik - 11 Feb 2016
I made update to my initial question. Please look again. Thank uposted by WAI YUNG 11 Feb 2016
Can you try it with the online compiler? Just to rule that issue out. And COM20 is the Teensy? So if you unplug it, it is gone?posted by Erik - 11 Feb 2016
Yes. I also tried mbed online compiler. LED worked. Printf did not work. If I unplug Teensy 3.1, COM20 disappear. I can open COM20 in PuTTY, but nothing come out.posted by WAI YUNG 11 Feb 2016
I have updated the my initial post with a fix. It is a Windows 8 problem after all. Thank uposted by WAI YUNG 14 Feb 2016
Good to read you got it semi-solved. But it should work also on W8.1 (Worked for me, and now on W10 too). Only indeed the whole unsigned driver thing is a pain to go through the process.posted by Erik - 14 Feb 2016
|
https://os.mbed.com/questions/67913/Teensy-31-Teensy-32-USBSerial-not-Showin/
|
CC-MAIN-2020-45
|
en
|
refinedweb
|
File
- Date:
- 2016/04/26 23:10
- Filename:
- presentation_greening_through_it.pdf
- Size:
- 631KB
- References for:
- Nothing was found.
|
https://www.it.lut.fi/wiki/doku.php/courses/ct60a7000/spring2016/green/greening?tab_files=upload&do=media&tab_details=view&image=courses%3Act60a7000%3Apresentation_greening_through_it.pdf&ns=servers
|
CC-MAIN-2020-45
|
en
|
refinedweb
|
#====================================================================#
# The release file for RDF-Core                                      #
#====================================================================#

Version 0.51 February 19, 2007
----------------
 - Makefile reporting dependencies in CPAN-compatible way
 - added basic test suite
 - minor fixes

Version 0.50 July 14, 2006
----------------
 - a major bug in storing language tags and datatypes for literals fixed (bug 3148, thanks to Pierluigi Miraglia and Gregory Williams), it affects all types of storages (Memory, DB_File, Postgres). The new version is NOT compatible with old storages, so you will have to serialize your data, create new storages and parse it back.

Version 0.40 March 2, 2006
----------------
 - some code cleanup (thanks to Gregory Williams)
 - a new method equals() added to literal object, datatype and language tag are taken into account (bug 2306)
 - escaping characters in rdf:about attribute fixed (bug 1630, thanks to Norman Walsh)
 - rdf:nodeID is produced by serializer when necessary, instead of incorrect deanonymizing resource

Version 0.31 August 11, 2004
----------------
 - Typo in RDF::Core::NodeFactory fixed
 - RDF::Core::Model::Serializer callback functions can be overridden now

Version 0.30 March 14, 2003
----------------
Specification changes:
 - datatype and language information are handled wherever needed (parser, serializer, storages, literal object, NTriples output)
 - RDF/XML: rdf:nodeID attribute is handled in parser
 - RDF/XML: rdf:parseType="Literal" asserts typed literal
 - RDF/XML: rdf:parseType="Collection" attribute value is handled
Fixed bugs:
 - rdf namespace is prepended to about attribute in serializer
 - Namespace/local value separation is kept where RDF::Core::Resource->clone() is called. (reported by Dan Berrios)

Version 0.20 October 14, 2002
----------------
 - several bugs fixed
 - external variables binding supported in query (prepare() and execute() methods)
 - comments are allowed in query syntax
 - query language syntax changes for (hopefully) better readability

Version 0.16 May 27, 2002
----------------
 - a second variable binding is added to query language (binds variable to property itself instead of property value)
 - member() - of container - function added to query language functions
 - a method "equals" is added to RDF::Core::Resource
 - a new module Schema is added, which provides access to RDF Schema
 - query tokenizer bug corrected (bug 854)

Version 0.15 April 16, 2002
----------------
 - parser recognizes and handles xml:base attribute (bug 771)
 - parser bug with rdf:bagID fixed (bug 774)
 - parser bug with rdf:li fixed (bug 776)
 - query results are not returned in array of rows, a callback function is called for each row instead
 - query evaluation speed was improved by applying conditions as soon as possible
 - DB_File storage handles SIGINT to protect data consistency
 - DB_File storage has a new option Sync which says how often it should synchronize cache with disk

Version 0.11 March 28, 2002
----------------
 - the damned README added (was missing in 0.10 again)

Version 0.10 March 27, 2002
----------------
 - countStmts now works with DBMS storage (bug 768)
 - USE section added to query (defining namespace prefixes)
 - a binary relation not equal (!=) added to query language
 - BNF diagram of the query language added to Query pod
 - class operator (::) is allowed in From section of query now

Version 0.04 March 22, 2002
----------------
 - README file added

Version 0.03 March 21, 2002
----------------
 - blank (anonymous) nodes don't have URI, but _:<name>

Version 0.01 October 3, 2001
----------------
 - original version; created by h2xs 1.20

Note: Bug references refer to .
Angular 5 Service Worker
Manifest files and Service Workers are the core technologies behind Progressive Web Apps (PWAs). Using these technologies, a PWA is able to close the gap between a classic web application and a desktop or native mobile application.
In the following tutorial we’ll take a deeper look at the new Angular 5 Service Worker support and explore how to use and enable this feature in your next project.
With Angular 5 the development of Service Workers is becoming significantly easier. By using Angular CLI you can choose to add Service Worker functionality by default.
The Angular Service Worker functionality is provided by the module @angular/service-worker.
Starting From Scratch
To explore the Angular 5 Service Worker functionality let’s start from scratch. First let’s create a new project by using Angular CLI on your system. In order to create a new project we’re using Angular CLI 1.6 which is still in release candidate version. Using Angular CLI 1.5 (which is the current version) is not sufficient in this case as service worker support for Angular 5 is added in version 1.6. If you haven’t installed Angular CLI 1.6 on your system yet you can do so by using the following command:
$ npm install -g @angular/cli@next
The @next postfix is used together with the package name @angular/cli to indicate that version 1.6 should be installed. Once version 1.6 is released the @next postfix is no longer needed.
Having installed Angular CLI successfully you can check the version by using the following command:
$ ng --version
The output should correspond to what you can see in the following screenshot:
The screenshot shows that we’re using Angular CLI version 1.6.0-rc.0 for this tutorial.
Next, a new project can be created by using the following command:
$ ng new angularpwa --service-worker
A new directory angularpwa is created, the project template is downloaded and dependencies are installed automatically. Furthermore the Angular 5 service worker functionality is activated and the package @angular/service-worker is installed as part of the dependencies.
You can check that the service worker activation was done by opening file .angular-cli.json and search for the following configuration setting:
"serviceWorker": true
This is telling Angular CLI to add a service worker when building the application.
Trying Out The Default Service Worker
Let’s try out the default service worker.
If you’re starting up the development web server with
$ ng serve --open
and check the Application tab in the Chrome Developer Tools you'll notice that no service worker is active. The reason is that Angular CLI is not activating the service worker when we're in development mode. Instead you first have to build your application for production by using:
$ ng build --prod
The production build of the application is made available in the dist subfolder. To make the content of the dist folder available via a web server you can use any static web server like http-server.
The http-server package is available via npm. To install http-server globally on your system just use the following command:
$ npm install http-server -g
Now you can start http-server right inside the dist folder:
$ http-server
The following screenshot shows the output in the terminal:
The application is made available at http://localhost:8080 (the http-server default port) and the page is loaded in the browser automatically, so that you should be able to see the following output:
If you now open up the Chrome Developer Tools you can see the active service worker on the Application tab.
If you scroll down to the Cache Storage section you can see that the storage is filled with all assets of our application.
With all assets in the browser cache we can now stop the web server (to simulate that the network connection to the server is not available):
Now try to reload the page in the browser. You'll get the exact same result as before. The HTTP requests are now fulfilled by the installed service worker with assets from the cache.
Taking A Look Into The Code
Now that you saw the Angular Service Worker in action let’s take a look at the code of our project. If you’re opening file src/app/app.module.ts you should be able to find the following source code:
import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { ServiceWorkerModule } from '@angular/service-worker';

import { AppComponent } from './app.component';
import { environment } from '../environments/environment';

@NgModule({
  declarations: [
    AppComponent
  ],
  imports: [
    BrowserModule,
    environment.production ? ServiceWorkerModule.register('/ngsw-worker.js') : []
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }
First, let’s take a look at the import statement. ServiceWorkerModule is imported from the @angular/service-worker package. It’s needed to import that module to register the service worker (which is available in file ngsw-worker.js) for our application. The registration is done in the array which is assigned to the imports property of the @NgModule decorator with the following line of code:
environment.production ? ServiceWorkerModule.register('/ngsw-worker.js') : []
Important to note is the fact that the service worker registration, i.e. the call to ServiceWorkerModule.register('/ngsw-worker.js'), is only done if we're in production mode (if environment.production is true).
Service Worker Configuration
The Service Worker script which is preinstalled (ngsw-worker.js) is a generic service worker which can be configured.
Here is the content of the default Service Worker configuration file which is available in file src/ngsw-config.json:
{
  "index": "/index.html",
  "assetGroups": [{
    "name": "app",
    "installMode": "prefetch",
    "resources": {
      "files": [
        "/favicon.ico",
        "/index.html"
      ],
      "versionedFiles": [
        "/*.bundle.css",
        "/*.bundle.js",
        "/*.chunk.js"
      ]
    }
  }, {
    "name": "assets",
    "installMode": "lazy",
    "updateMode": "prefetch",
    "resources": {
      "files": [
        "/assets/**"
      ]
    }
  }]
}
This JSON object contains two configuration properties on the first level:
- index: Pointing to the index.html file of the project
- assetGroups: contains the configuration objects for assets of the project which should be part of the caching managed by the Service Worker
The assetGroups array consists of two objects: app and assets.
The app asset group contains the static files favicon.ico and index.html. Furthermore the versioned JavaScript and CSS bundle files are included. For those elements the installMode is set to prefetch, which means that those files are prefetched and added to the cache at once. This is needed because these items are essential for the application to work offline.
The assets asset group contains the configuration for caching all elements in the assets folder of our project. For those items the installMode is set to lazy, which means that the items are added to the cache as they are requested.
Extending The Configuration
The default configuration is caching the assets from a bare-bone Angular project. If you’re extending your application with resources from external locations (e.g. fonts, images, …) or data which is retrieved from an API endpoint you need to further extend the configuration of the Service Worker.
Caching External Resources
To add external resources which are needed by the app to the caching, you need to add a urls property to the resources object. The following example shows how you can add a URL pattern for caching all Google Fonts used by the application (the Google Fonts host shown below is an assumed example):
{
  "name": "assets",
  "installMode": "lazy",
  "updateMode": "prefetch",
  "resources": {
    "files": ["/assets/**"],
    "urls": [
      "https://fonts.googleapis.com/**"
    ]
  }
}
Caching Content From External APIs
If you'd like to cache content retrieved from external APIs you should introduce a new dataGroups section on the same level as assetGroups. In the following code excerpt you can see an example configuration for the endpoints /tasks and /users. Those two resources will be cached with a strategy of freshness for a maximum of 20 responses, a maximum age of 1 hour, and a timeout of 5 seconds.
"dataGroups": [ { "name": "tasks-users-api", "urls": ["/tasks", "/users"], "cacheConfig": { "strategy": "freshness", "maxSize": 20, "maxAge": "1h", "timeout": "5s" } } ]
By using freshness as the strategy we’re configuring a network-first strategy. You can change that to a cache-first strategy by using value performance instead.
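For comparison, a cache-first variant of the same data group might look like the sketch below (the endpoint names and limits are simply reused from the example above; they are not mandated by Angular):
"dataGroups": [
  {
    "name": "tasks-users-api",
    "urls": ["/tasks", "/users"],
    "cacheConfig": {
      "strategy": "performance",
      "maxSize": 20,
      "maxAge": "1h"
    }
  }
]
With performance, the Service Worker answers from the cache first as long as the cached entry is younger than maxAge and only falls back to the network otherwise, trading data freshness for response time.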
ONLINE COURSE: Angular - The Complete Guide
Check out the great Angular – The Complete Guide with thousands of students already enrolled:
Angular – The Complete Guide
- This course covers Angular 5
- Develop modern, complex, responsive and scalable web applications with Angular
- Use their gained, deep understanding of the Angular fundamentals to quickly establish themselves as frontend developers
- Fully understand the architecture behind an Angular application and how to use it
- Create single-page applications with one of the most modern JavaScript frameworks out there
Progressive Web Apps (PWA) - The Complete Guide
Build a Progressive Web App (PWA) that feels like an iOS & Android App, using Device Camera, Push Notifications and more …
Progressive Web Apps (PWA) – The Complete Guide
- Build web apps that look and feel like native mobile apps for iOS and Android
- Leverage device features like the camera and geolocation in your web apps
- Use service workers to build web apps that work without internet connection (offline-first)
- Use web push notifications to increase user engagement with your web apps
If 'k' is the number of states of the NFA, how many states will the DFA simulating NFA have?
If 'k' is the number of states of the NFA, it has 2^k subsets of states. Each subset corresponds to one of the possibilities that the DFA must remember, so the DFA simulating the NFA will have 2^k states.
A complete graph is a graph in which each pair of graph vertices is connected by an edge. The chromatic number of a complete graph having 100 vertices is _______.
The chromatic number of a complete graph is the number of vertices it has.
Therefore, the chromatic number of a complete graph having 100 vertices is 100.
A processor has 128 distinct instructions. A 24-bit instruction word has an opcode, register, and operand. The number of bits available for the operand field is 7. The maximum possible value of the general-purpose register is ______.
No. of bits required for 128 instructions = log2 128 = 7
No. of bits required for operand field = 7
No. of bits required for register field = 24 – 7 – 7 = 10
Maximum no. of registers = 2^10 = 1024
i.e. from 0 to 1023. Hence, maximum value = 1023
Which of the following expressions is equivalent to A.B+A′.B+A′.B′ ?
The given expression can be solved as: A.B + A′.B + A′.B′ = B(A + A′) + A′.B′ = B + A′.B′ = B + A′ (by absorption). Hence the expression is equivalent to A′ + B.
Consider the following C function.
int strg(char *str)
{
static int temp=0;
if(*str!=NULL)
{
temp++;
strg(++str);
}
else
{
return temp;
}
}
What is the output of strg("abcabcbb")?
The probability of a shooter hitting the target is 1/3 and three shots at the bull’s eye are needed to win the game. What could be the least number of shots for the shooter to give him more than half chance of winning the game?
1 – P (x) ≥ 50%
Suppose a circular queue of capacity n elements is implemented using an array. Circular queue uses REAR and FRONT as array index variables, respectively. Initially, REAR = FRONT = -1. The queue is initially full with 5 elements i.e. 1 2 3 4 5. After that 3 dequeue operations are performed. What is the condition to insert an element in to the above queue?
In a circular queue, the new element is always inserted at the rear position. If the queue is not full, check whether (rear == SIZE - 1 && front != 0); if this is true, set rear = 0 (wrap around) and insert the new element, otherwise increment rear and insert.
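As an illustration (not part of the original question), a minimal enqueue routine following exactly this wrap-around condition could look like the C sketch below; SIZE, queue[], front and rear are assumed to be the globals described in the question:
#define SIZE 5

int queue[SIZE];
int front = -1, rear = -1;

/* Insert an element into the circular queue; returns 0 on success, -1 if full. */
int enqueue(int value)
{
    if (front != -1 && front == (rear + 1) % SIZE)
        return -1;                        /* queue is full */

    if (front == -1) {                    /* empty queue: first element */
        front = 0;
        rear = 0;
    } else if (rear == SIZE - 1 && front != 0) {
        rear = 0;                         /* the wrap-around case from the answer above */
    } else {
        rear = rear + 1;
    }

    queue[rear] = value;
    return 0;
}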
Which of the following is/are correct inorder traversal sequence(s) of binary search tree?
1. 5, 2, 6, 7, 9, 11, 1, 10
2. 10, 15, 16, 23, 38, 56, 89
3. 3, 7, 9, 16, 67, 88, 98
4. 7, 1, 8, 56, 34, 66, 45
The inorder traversal of the binary search tree gives the sequence in ascending order.
Out of the given sequences, only 2 and 3 are in ascending order.
Therefore, option 2 is the correct answer.
Consider the propagation delay along the bus and through the ALU is 35 ns and 120 ns respectively. It takes 18 ns for a register to copy data from the bus. The total time required to transfer data from one register to another is ______ ns
Transfer time = propagation delay + copy time
= 35 + 18
= 53 ns
Consider the following set of statements:
S1: Given a context-free language, there is a Turing machine which will always halt in the finite amount of time and give answer whether language is ambiguous or not.
S2: Given a CFG and input alphabet, whether CFG will generate all possible strings of input alphabet (∑*) is undecidable.
S3: Consider three decision problems P1, P2 and P3. It is known that P1 is decidable and P2 is undecidable. P3 is undecidable if P2 is reducible to P3.
Which of the given statements is true?
Statements S2 and S3 are true.
Statement S1 can be corrected as: Given a context-free language, there is no Turing machine which will always halt in the finite amount of time and give answer whether language is ambiguous or not.
Consider a relation A with n elements. What is the total number of relations which can be formed on A which are irreflexive?
A relation R on a set A is irreflexive provided that no element is related to itself; in other words, (a, a) is not in R for any a in A.
An irreflexive relation is any subset of the (n^2 - n) ordered pairs that are not of the form (a, a).
Therefore, number of irreflexive relations = 2^(n^2 - n)
In the IPv4 addressing format, 2^14 is the number of networks allowed under ______.
In class B, size of net id = 16 bits
Size of host id = 16 bits
In the network id, the first two bits are reserved as the leading bits, i.e. 10.
Hence total number of networks possible = 2^(16 - 2) = 2^14
What is the output of following C code?
#include <stdio.h>
#define MUL(x) (x * x)
int main( )
{
int i=4;
int p,q;
p= MUL(i++);
q = MUL(++i);
printf("%d", p + q);
return 0;
}
In C, when preprocessor sees the #define directive, it goes through the entire program in search of the macro templates; wherever it finds one, it replaces the macro template with the appropriate macro expansion. In this program wherever the preprocessor finds the phrase MUL(x) it expands it into the statement (x * x).
p = 4 x 5 = 20
now i = 6
q = 8 x 8 = 64
p + q = 84
Consider a disk pack with 8 surfaces, 128 tracks per surface, 128 sectors per track and 512 bytes per sector. The number of bits required to address the sector is ______.
Number of sectors = Number of surfaces*Number of tracks per surface*Number of sectors per track
= 2^3 * 2^7 * 2^7 = 2^17
Therefore, number of bits required = 17
Consider the implementation of Dijkstra’s shortest path algorithm on a weighted graph G(V, E) with no negative edge. If this algorithm is implemented to find the shortest path between all pair of nodes, the complexity of the algorithm in a worst-case is ______.
Dijkstra’s algorithm solves the single-source shortest-paths problem on a weighted, directed graph G for the case in which all edge weights are nonnegative. The time complexity of Dijkstra’s algorithm is O(E log V).
One can implement Dijkstra's shortest path algorithm to find all-pairs shortest paths by running it once from every vertex, which takes O(V · E log V) time. In the worst case of a dense graph with E = O(V^2), this is O(V^3 log V).
Consider the two cascaded 2-to-1 multiplexers as shown in the figure:
The minimal sum of products form of the output X is
In a TCP connection the size of the available buffer space in the receiver is 8 and senders window size is 2. The size of the congestion window is ________.
A receiver and network can dictate to the sender the size of the sender's window. If the network cannot deliver the data as fast as they are created by the sender, it must tell the sender to slow down. In addition to the receiver, the network is a second entity that determines the size of the sender's window.
Actual window size = minimum (rwnd, cwnd)
rwnd = size of the receiver window
cwnd = size of the congestion window
2 = minimum(8, cwnd)
cwnd = 2
Consider two weighted complete graph G1 and G2 on the vertex set V1, V2, V3,…, V5 such that weight of the edge (Vi, Vj) is min(i,j) for the first graph and max(i,j) for the second graph respectively. The difference between the weight of a minimum spanning tree of G1 and G2 is ______.
A priority queue with n elements is implemented as a max heap. The time complexity to delete the element of the highest priority is ___________.
Since the priority queue is implemented as a max heap, the maximum element will be present at the root node. Deletion of the element will lead to rebuilding the max heap.
Therefore, time complexity = O(log n).
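For illustration (not part of the original answer), the sketch below assumes a 0-indexed array max heap heap[0..n-1] and shows why deletion costs O(log n): the root is replaced by the last element and then sifted down along a single root-to-leaf path.
/* Delete the maximum from a non-empty max heap stored in heap[0..(*n)-1]. */
int delete_max(int heap[], int *n)
{
    int max = heap[0];
    heap[0] = heap[--(*n)];      /* move the last element to the root */

    int i = 0;
    for (;;) {                   /* sift down: at most log2(n) levels are visited */
        int left = 2 * i + 1, right = 2 * i + 2, largest = i;
        if (left  < *n && heap[left]  > heap[largest]) largest = left;
        if (right < *n && heap[right] > heap[largest]) largest = right;
        if (largest == i) break;
        int tmp = heap[i]; heap[i] = heap[largest]; heap[largest] = tmp;
        i = largest;
    }
    return max;
}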
Consider the following C code:
struct emp
{
int empid;
char *name;
char *dept;
};
struct empDetails
{
int age;
char *city;
char *state;
struct emp employee;
};
int main()
{
struct empDetails details;
…
}
Which of the following syntax is the correct way to display employee name?
A structure contains a number of data items grouped together; these items may or may not be of the same type. A structure uses the dot (.) operator to access its elements, and one structure can be nested within another structure. To access an element of a structure that is itself part of another structure, the dot operator is used twice.
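For illustration (the sample values below are hypothetical, not from the question), accessing the employee name through the nested structure applies the dot operator twice:
#include <stdio.h>

struct emp { int empid; char *name; char *dept; };
struct empDetails { int age; char *city; char *state; struct emp employee; };

int main(void)
{
    struct empDetails details;

    details.age = 30;                       /* hypothetical sample values */
    details.employee.empid = 1;
    details.employee.name = "Alice";

    /* the dot operator is applied twice: outer structure first, then the nested one */
    printf("%s\n", details.employee.name);
    return 0;
}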
Consider a system with byte-addressable memory, 40-bit logical address. What is the page size in MB if each page table entry is of 8 bytes each and size of page table is 8 MB?
Page table size = number of page table entries x entry size
8 x 2^20 = (2^40 / K) x 8
K = 2^40 / 2^20
K = 2^20 = 1 MB
Study the following ER diagram carefully:
How many total tables are required to store the data?
Each strong and weak entity needs a separate table. Therefore, entities Artist, Album, Track and Played require 1 table each.
If the relationship is 1:n, we do not need any separate table to store it.
Therefore, total number of tables required = 4
Say that string x is a prefix of string y if a string z exists where xz = y and that x is a proper prefix of y if in addition x ≠ y. Suppose an operation is defined on a regular language A. In which of the following options, the class of the regular language is closed?
The language A is closed under NOPREFIX(A) = {ω Є A | no proper prefix of ω is a member of A}.
Let M = {Q, Ʃ, δ, q0, F} be an NFA recognizing A, where A is some regular language. Construct M' = {Q', Ʃ, δ', q0', F'} recognizing NOPREFIX(A) as:
Q' = Q
For r Є Q' and a Є Ʃ, define δ'(r, a)
δ'(r, a) = δ(r, a) if r ∉ F.
δ'(r, a) = ∅ if r Є F.
q0' = q0
F' = F
A PC-relative mode branch instruction is 8 bytes long. The address of the instruction, in decimal, is 548321. Find the branch target address if the signed displacement in the instruction is –29.
PC – relative mode branch instruction uses content of the program counter. Program counter points to the next instruction i.e. 548321 + 8 = 548329
Branch target address = 548329 – 29 = 548300.
TCP opens a connection using an initial sequence number of 3500 and sends data at 5 MBps. The other party opens the connection with a sequence number of 1200. Wrap around time for both the sequence number differs by 12562.77 sec. Calculate the data rate(in KB) for the second party.
Wrap around time for the first party = 2^32 / (5 x 10^6) ≈ 859 sec
Wrap around time for the second party = 859 + 12562.77 ≈ 13421.77 sec
2^32 / X = 13421.77
X = 320000 Bytes per sec
X = 320 KB per sec
A gambler has 4 coins in his pocket. Two are double-headed, one is double-tailed, and one is normal. The coins cannot be distinguished unless one looks at them. The gambler takes a coin at random, opens her eyes and sees that the upper face of the coin is a head. What is the probability that the lower face is a head?
Sample Space: {HH,HH,TT,TH/HT}
There are 5 faces that are heads out of a total 8, so probability is 5/8. Let A be the event that the upper face is a head, and B be the event that the lower face is head.
Pr[A] = Pr[B] = 5/8.
Pr[A∩B] = 2/4 = 1/2.
Pr[B | A] = Pr[A∩B] / Pr[A] = (1/2) / (5/8) = 4/5.
Hence option b is correct.
Consider the minterm list form of a Boolean function F given below:
F(P, Q, R, S) = Ʃm(0, 1, 2, 5, 7,9, 10) + d(3, 8, 11, 14)
Here, m denotes a minterm and d denotes a don't care term. The number of essential prime implicants of the function F is ___________.
Consider the main memory with four page frames and the following sequence of page references:
11 3 5 9 6 5 3 6 5 11 8 9
Which one of the following page replacement policy experiences same no. of page hit?
I. FIFO
II. LRU
III. Optimal page replacement
IV. LIFO
Consider a DFS is implemented on an undirected weighted graph G. Let d(r,u) and d(r,v) be the weight of the edge (r,u) and edge (r, v) respectively. If v is visited immediately after u in depth first traversal, which of the following statement is correct?
Depth-first search explores edges out of the most recently discovered vertex u that still has unexplored edges leaving it. Once all of u’s edges have been explored, the search “backtracks” to explore edges leaving the vertex from which u was discovered.
Which of the following is equivalent to
The correct answer is option 3.
An operating system uses the banker's algorithm for deadlock avoidance to manage the allocation of four resources A, B, C, and D. The table given below represents the current system state.
There are 3 units of type B, 2 units of type D still available. The system is currently in the safe state. Which of the following sequence is a safe sequence?
Available = {0, 3, 0, 2}
Which of the following statements is false?
Option 3 is false.
If L1 and If L2 are two context free languages, their intersection L1 ∩ L2 need not be context free.
For example, L1 = { anbncm | n >= 0 and m >= 0 } and L2 = (ambncn | n >= 0 and m >= 0 }
L3 = L1 ∩ L2 = { anbncn | n >= 0 } need not be context free.
L1 says number of a’s should be equal to number of b’s and L2 says number of b’s should be equal to number of c’s. Their intersection says both conditions need to be true, but push down automata can compare only two. So it cannot be accepted by pushdown automata, hence not context free.
Consider the following proposed solution for the two – process synchronization.
Code for P0:
do
{flag[0] = true;
turn = 1;
while(flag[1] && turn == 1);
(critical section)
flag[0] = false;
(remainder section)
}
while(true);
Code for P1:
do
{flag[1] = true;
turn = 0;
while(flag[0] && turn == 0);
(critical section)
flag[1] = false;
(remainder section)
}
while(true);
Above solution requires two shared data items: turn and flag[]
The variable turn indicates whose turn it is to enter its critical section. The flag array is used to indicate if a process is ready to enter its critical section.
Which of the following statement is TRUE?
P0 and P1 could not have successfully executed their while statements at about the same time, since the value of turn can be either 0 or 1 but cannot be both. Hence, one of the processes say, P1 must have successfully executed the while statement, whereas P0 had to execute at least one additional statement. Hence mutual exclusion is preserved.
Once P1 exits its critical section, it will reset flag[1] to false, allowing P0 to enter its critical section. If P1 resets flag[1] to true, it must also set turn to 0. Thus, since P0 does not change the value of the variable turn while executing the while statement, P0 will enter the critical section (progress) after at most one entry by P1 (bounded waiting).
A bipartite graph is a set of graph vertices decomposed into two disjoint sets such that no two graph vertices within the same set are adjacent. What is the maximum number of edges in a bipartite graph having 6 vertices?
There will be maximum edges if we have 3 (n/2) vertices in each set.
Therefore, maximum number of edges = (n/2)*(n/2) = n^2/4 = 9
What is the output of the following C code?
int main( )
{
auto int i=6;
{
auto int i=7;
{
auto int i=8;
printf ( "\n%d ", i ) ;
}
printf ( "%d ", i ) ;
}
printf ( "%d", i ) ;
}
The compiler treats the three i's as totally different variables, since they are defined in different blocks. The innermost printf( ) prints the i with value 8. Once control comes out of the innermost block, that i is lost, and hence the second printf( ) refers to the i with value 7. Similarly, when control comes out of the next block, the third printf( ) refers to the outer i with value 6, so the output is 8 7 6.
The characters ‘a’ to ‘e’ have the following frequencies. A Huffman code is used to represent the message. A message is made up of characters given below. What is the corresponding Huffman code for message ‘ace’?
Consider the following set of statements:
S1: In a depth-first traversal of a graph G with n vertices, k edges are marked as tree edges. There are n-k connected components in G.
S2: A depth-first search necessarily finds the shortest path between the starting point and any other reachable node.
S3: The depth-first tree on the simple undirected graph never contains a cross edge.
Which of the given statements is false?
Statement S2 is false. It can be corrected as:
A BFS will find the shortest path between the starting point and any other reachable node. A depth-first search will not necessarily find the shortest path.
In IP packet has arrived with datagram of size 700 bytes. The size of the IP header is 20 bytes. This packet will be forwarded on the link whose MTU is 185 bytes. The number of fragments that the IP datagram will be divided is ________.
Maximum transmission unit = 185 bytes
Size of the IP header = 20 bytes
Number of fragments = ⌈(700 - 20) / 160⌉ = ⌈680 / 160⌉ = 5, since each fragment can carry at most 185 - 20 = 165 bytes of data, rounded down to a multiple of 8, i.e. 160 bytes.
What is the generating function for the different ways in which eight identical cookies can be distributed among three distinct children if each child receives at least two cookies and no more than four cookies?
Because each child receives at least two but no more than four cookies, for each child there is a factor equal to (x2 + x3 + x4) in the generating function for the sequence.
Because there are three children, this generating function is (x2 + x3 + x4)3
The total number of number – tokens and literal – tokens in the following C code is ______.
int main()
{
float r, area;
printf("\nEnter the radius of Circle : ");
scanf("%d", &r);
area = 3.14 * r * r;
printf("\nArea of Circle : %f", area);
return 0;
}
Which of the following problems is undecidable?
Type-0 grammars include all formal grammars. Type 0 grammar language are recognized by turing machine. These languages are also known as the recursively enumerable languages.
The membership problem for Type-0 languages is undecidable.
A system has 3 resources and 5 processes competing for them. Each process can request a maximum of N instances. The largest value of N that will always avoid deadlock is _______.
In the worst case each process holds one less than its maximum request, i.e. 5(N - 1) instances in total.
Deadlock is avoided as long as one more instance is still available:
5(N - 1) + 1 <= 3
5N - 4 <= 3
5N <= 7
N <= 7/5, so the largest integer value of N is 1.
The number of tables required to convert the relational schema R(A, B, C, D, E, F, G, H) into 3NF with following functional dependencies is _____.
A → DG
AB → E
D → C
E → F
G → H
(AB)+ = {ABDGECFH}
Therefore, AB is the candidate key.
The given relation is in 1NF.
Converting in 2NF
R1(A, B, E, F) and R2(A, D, G, C, H)
Converting in 3NF
R11(A, B, E), R12(E, F)
R21(A, D, G), R22(D, C) and R23(G, H)
Hence, the number of tables is 5.
There are 6 stations in a slotted LAN. The probability that each station transmits during a contention slot is 0.8. What is the probability that only one station transmits in a given time slot? (Compute the value up to 4 decimal places)
The probability that exactly one station transmits = 6 x 0.8 x (0.2)^5 = 0.0015
Consider the following relation:
Student(courseid, secid, semester, year, grade)
What is the expression for finding all the courses taught in the Fall 2009 semester but not in the Spring 2010 semester?
The correct answer is option 3 i.e.
Consider the following parse tree for the expression 2^8 - 4 - 1 ^2
The value of the given expression evaluated using the above parse tree is 512. Operator ^ is used to compute the power of a given number. What are the precedence order and associativity of the operator ^ and –?
(2^(((8 – 4) – 1) ^2))
(2^((3^2)))
(2^9)
512
A cache memory unit with a capacity of 256 KB is implemented as a 4-way set-associative cache. What is the memory size (in MB) if number of tag bits is 6?
Cache size = 256 KB = 2^18 B
Number of tag bits = 6
The physical address is split as: tag | set index | block offset
Number of sets = 2^x, block size = 2^y
Cache size = Number of sets * Number of lines per set * Block size
2^18 = 2^x * 4 * 2^y
2^(x+y) = 2^16
x + y = 16 bits
Therefore, width of physical address = 6 + 16 = 22 bits
Memory size = 2^22 B = 4 MB
Let M1, M2, and M3 be three matrices of dimensions 12 x 9, 9 x 15, 15 x 10 respectively. The minimum number of scalar multiplications required to find the product M1 M2 M3 using the basic matrix multiplication method is ______.
(M1 M2) M3:
12 x 9 x 9 x 15 -> resultant matrix: 12 x 15 -> no. of multiplications: 12 x 9 x 15 = 1620
12 x 15 x 15 x 10 -> resultant matrix: 12 x 10 -> no. of multiplications: 12 x 15 x 10 = 1800
Total no. of multiplication: 1620 + 1800 = 3420
M1 (M2 M3):
9 x 15 x 15 x 10 -> resultant matrix: 9 x 10 -> no. of multiplications: 9 x 15 x 10 = 1350
12 x 9 x 9 x 10 -> resultant matrix: 12 x 10 -> no. of multiplications: 12 x 9 x 10 = 1080
Total no. of multiplication: 1350 + 1080 = 2430
What is the output of following C code?
#include<stdio.h>
int main( )
{
char s[ ] = "C programming and Data structures" ;
printf ( "\n%s", &s[2] ) ;
printf ( "\n%s", s ) ;
printf ( "\n%s", &s ) ;
printf ( "\n%c", s[2] ) ;
return 0;
}
A processor provides an instruction which transfers 64 bytes of data from one register to another register. Instruction fetch (IF) and Instruction decode (ID) takes 20 clock cycle. Then it takes 30 clock cycles to transfer each byte. The processor is clocked at a rate of 12 GHz. What is the delay in acknowledging an interrupt if the instruction is non-interruptible? (Compute value rounding to two decimal places.)
Length of clock cycle = 1/12 ns (clock rate is 12 GHz)
Length of instruction cycle = (20 + (30 x 64)) x 1/12 = 161.67 ns
Worst case delay = length of instruction cycle = 161.67 ns
Consider the following relations:
Student(ID, course, sec, semester, year)
Teacher(TID, course, sec, semester, year, salary)
What is the query to find the total number of (distinct) students who have taken courses taught by the instructor with ID 100?
The correct answer is option 3 i.e.
select count (distinct ID)
from Student
where (course, sec, semester, year) in (select course, sec, semester, year
from Teacher where Teacher.TID= 100);
The IN operator is used when you want to compare a column with more than one value. It is similar to an OR condition.
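As a small illustration (the course values 'CS101' and 'CS202' are hypothetical, not from the question), an IN predicate over a value list behaves like a chain of OR comparisons:
-- IN over a list of values ...
select count(distinct ID) from Student where course in ('CS101', 'CS202');
-- ... is equivalent to the OR form
select count(distinct ID) from Student where course = 'CS101' or course = 'CS202';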
Consider the slow start phase for a congestion control in a TCP connection. Initially, the window size is 4 MSS and the threshold is 36 MSS. At which transmission window size reached the threshold limit?
Start ⇒ 4 MSS
Window size after first transmission = 8 MSS
Window size after second transmission = 16 MSS
Window size after third transmission = 32 MSS
Window size after fourth transmission = 36 MSS
netdata: real-time system performance monitoring. netdata is a highly optimized Linux daemon that provides real-time performance monitoring of Linux systems, applications, and SNMP devices through the web.
netdatabot released this
Release v1.22.0.
This release also contains 1 new collector, 1 new exporting connector, 1 new alarm notification method, 27 improvements, 16 documentation updates, and 22 bug fixes.
At a glance.
Acknowledgments
- amishmm for updating netdata.conf and netdata.service.v235.in.
- adamwolf for fixing a typo in netdata-installer.sh.
- lassebm for fixing a crash when shutting down an Agent with the ACLK disabled.
- yasharne for adding a new whoisquery collector and for adding health alarm templates for both the whoisquery and x509check collectors.
- illumine for adding Dynatrace as a new alarm notification method.
- slavaGanzin, carehart, Jiab77, and IceCodeNew for documentation fixes and improvements.
Breaking changes
- The previous iteration of Netdata Cloud, accessible through various Sign in and Nodes view (beta) buttons on the Agent dashboard, is deprecated in favor of the new Cloud experience.
- Our old documentation site (docs.netdata.cloud) was replaced with Netdata Learn. All existing backlinks redirect to the new site.
- Our localization project is no longer actively maintained. We're grateful for the hard work of its contributors.
Improvements
Netdata Cloud
- Enabled support for Netdata Cloud. (#8478), (#8836), (#8843), (#8838), (#8840), (#8850), (#8853), (#8866), (#8871), (#8858), (#8870), (#8904), (#8895), (#8927), (#8944) by amoss, jacekkolasa, Ferroin, prologic, mfundul, underhood, and stelfrag.
- Added TTL headers to ACLK responses. (#8760) by amoss
- Improved the thread exit fixes in #8750. (#8750) by amoss
- Added support for building libmosquitto on FreeBSD/macOS. (#8254) by Ferroin
- Improved ACLK reconnection sequence. (#8729) by stelfrag
- Improved ACLK memory management and shutdown sequence. (#8611) by stelfrag
- Added session-id to ACLK using connect timestamp. (#8633) by amoss
Collectors
- Improved the index size for the eBPF collector. (#8743) by thiagoftsm
- Added health alarm templates for the whoisquery collector. (#8700) by yasharne
- Added a whoisquery collector. go.d.plugin/#368 by yasharne
- Removed an automatic restart of apps.plugin. (#8592) by vlvkobal
Packaging/installation
- Added missing NETDATA_STOP_CMD in netdata-installer.sh. (#8897) by prologic
- Added JSON-C dependency handling to installation and packaging. (#8776) by Ferroin
- Added a check to wait for a recently-published tag to appear in Docker Hub before publishing new images. (#8713) by knatsakis
- Removed obsolete scripts from Docker images. (#8704) by knatsakis
- Removed obsolete DEVEL support from Docker images. (#8702) by knatsakis
- Improved how we publish Docker images by pushing synchronously. (#8701) by knatsakis
Exporting
- Enabled internal statistics for the exporting engine in the Agent dashboard. (#8635) by vlvkobal
- Implemented a Prometheus exporter web API endpoint. (#8540) by vlvkobal
Notifications
- Added a certificate revocation alarm for the x509check collector. (#8684) by yasharne
- Added the ability to send Agent alarm notifications to Dynatrace. (#8476) by illumine
CI/CD
- Disabled the document-start yamllint check. (#8522) by ilyam8
- Simplified Docker build/publish scripts to support only a single architecture. (#8747) by knatsakis
- Added Fedora 32 to build checks. (#8417) by Ferroin
- Added libffi to ArchLinux CI tests as a workaround for an upstream bug. (#8476) by Ferroin
Other
- Updated main copyright and links for the year 2020 in daemon help output. (#8937) by zack-shoylev
- Moved the bind to option to the [web] section and updated netdata.service.v235.in to sync it with recent changes. (#8454) by amishmm
- Put old dashboard behind a prefix instead of using a script to switch. (#8754) by Ferroin
- Enabled the truthy rule in yamllint. (#8698) by ilyam8
- Added Borg backup, Squeezebox servers, Hiawatha web server, and Microsoft SQL to apps.plugin so that it can appropriately group them by type of service. (#8646), (#8655), (#8656), and (#8659) by vlvkobal
Documentation
- Add custom label to collectors frontmatter to fix sidebar titles in generated docs site at learn.netdata.cloud. (#8936) by joelhans
- Added instructions to persist metrics and restart policy in Docker installations. (#8813) by joelhans
- Fixed modifier in Nginx guide to ensure correct paths and filenames. (#8880) by slavaGanzin
- Added documentation for working around Clang build errors. (#8867) by Ferroin
- Fixed typo in Docker installation instructions. (#8861) by carehart
- Added Docker instructions to claiming docs. (#8755) by joelhans
- Capitalized title in streaming doc. (#8712) by zack-shoylev
- Updated pfSense doc and added warning for apcupsd users. (#8686) by cryptoluks
- Improved offline installation instructions to point to correct installation scripts and clarify process. (#8680) by IceCodeNew
- Added missing path to the process of editing charts.d.conf. (#8740) by Jiab77
- Added combined claiming and ACLK documentation. (#8724) by joelhans
- Standardized how we link between various Agent-specific documentation. (#8638) by joelhans
- Pinned mkdocs-material to re-enable Netlify builds of documentation site. (#8639) by joelhans
- Updated main README.md with v1.21 release news. (#8619) by joelhans
- Changed references of MacOS to macOS. (#8562) by joelhans
Bug fixes
- Fixed kickstart error by removing old cron symlink. (#8849) by prologic
- Fixed bundling of old dashboard in binary packages. (#8844) by Ferroin
- Fixed typo in netdata-installer.sh. (#8811) by adamwolf
- Fixed failure output during installations by removing old function call. (#8824) by Ferroin
- Fixed bundle-dashboard.sh script to prevent broken package builds. (#8823) by prologic
- Fixed mdstat failed devices alarm. (#8752) by ilyam8
- Fixed rare race condition in old Cloud iframe. (#8786) by jacekkolasa
- Removed no-clear-notification options from portcheck health templates. (#8748) by ilyam8
- Fixed issue in system-info.sh regarding the parsing of lscpu output. (#8754) by Ferroin
- Fixed old URLs to silence Netlify's mixed content warnings. (#8759) by knatsakis
- Fixed master streaming fatal exits. (#8780) by thiagoftsm
- Fixed email authentication to Cloud/Nodes View. (#8757) by jacekkolasa
- Fixed non-escaped characters in private registry URLs. (#8757) by jacekkolasa
- Fixed crash when shutting down an Agent with the ACLK disabled. (#8725) by lassebm
- Fixed Docker-based builder image. (#8718) by ilyam8
- Fixed status checks for UPS devices using the apcupsd collector. (#8688) by ilyam8
- Fixed the build matrix in the build and install GitHub Actions checks. (#8715) by Ferroin
- Fixed eBPF collector compatibility with the 7.x family of RedHat. (#8694) by thiagoftsm
- Fixed alarm notification script by adding a check to the Dynatrace notification method. (#8654) by ilyam8
- Fixed threads_creation_rate chart context in the python.d MySQL collector. (#8636) by ilyam8
- Fixed errors shown when running install-required-packages.sh on certain Linux systems. (#8606) by ilyam8
- Fixed sudo check in charts.d libreswan collector to prevent daily security notices. (#8569) by ilyam8
netdatabot released this
Netdata v1.21.1
Release v1.21.1 is a hotfix release to improve the performance of the new React dashboard, which was merged and enabled by default in v1.21.0.
The React dashboard shipped in v1.21.0 did not properly freeze charts that were outside of the browser's viewport. When a user loaded many charts by scrolling through the dashboard, charts outside of the browser's viewport continued updating. This excess of chart updates caused all charts to update more slowly than every second.
v1.21.1 includes improvements to the way the Netdata dashboard freezes, maintains state, and restores charts as users scroll.
netdatabot released this
Netdata v1.21.0
Release v1.21.0 contains 2 new collectors, 3 new exporting connectors, 37 bug fixes, 46 improvements, and 25 documentation updates. We also made 26 bug fixes or improvements related to the upcoming release of Netdata Cloud.
At a glance
We added a new collector for Apache Pulsar, a popular open-source distributed pub-sub messaging system. We use Pulsar in our Netdata Cloud infrastructure (more on that later this month!), and are excited to start sharing metrics about our own Pulsar systems when the time comes. The Pulsar collector attempts to auto-detect any running Pulsar processes, but you can always configure the collector based on your setup.
Also new in v1.21 is a VerneMQ collector. We use the open-source MQ Telemetry Transport (MQTT) broker for Netdata Cloud as well. As with Pulsar, you can configure the VerneMQ collector to auto-detect your installation in just a few steps.
Our experimental exporting engine received significant updates with new connectors for Prometheus remote write, MongoDB, and AWS Kinesis Data Streams. You can now send Netdata metrics to more than 20 additional external storage providers for long-term archiving and deeper analysis. Learn more about the exporting engine in our documentation.
We upgraded our TLS compatibility to include 1.3, which applies to HTTPS for both Netdata's web server and streaming connections. TLS 1.3 is the most up-to-date version of the TLS protocol, and contains important fixes and improvements to ensure strong encryption. If you enabled TLS in the web server or streaming, Netdata attempts to use 1.3 by default, but you can also set the version and ciphers explicitly. Learn more in the documentation.
The Netdata dashboard has been completely re-written in React. While the look and behavior haven't changed, these under-the-hood changes enable a suite of new features, UX improvements, and design overhauls. With React, we'll be able to work faster and make better use of our talented engineers.
Acknowledgments
- Jiab77 for helping remove extra printed \n in various installation methods.
- SamK for fixing missing folders in /var/ for .deb installations.
- kevenwyld for improving Netdata's support of RHEL distributions.
- WoozyMasta for adding in the ability to get Kubernetes pod names with kubectl in bare-metal deployments.
- paulmezz for adding the ability to connect to non-admin user IDs when trying to collect metrics from a Ceph storage cluster.
- ManuelPombo for adding additional charts to our Postgres collector, and anayrat for helping review the changes.
- Default for adding lsyncd to the backup group in apps.plugin.
- bceylan, peroxy, toadjaune, grinapo, m-rey, and YorikSar for documentation fixes.
Breaking changes
None.
Improvements
- Extended TLS support for 1.3. (#8505) by thiagoftsm
- Switched to the React dashboard code as the default dashboard. (#8363) by Ferroin
Collectors
- Added a new Pulsar collector. (#8364) by ilyam8
- Added a new VerneMQ collector. (#8236) by ilyam8
- Added high precision timer support for plugins such as idlejitter. (#8441) by mfundul
- Added an alarm to the dns_query collector that detects DNS query failure. (#8434) by ilyam8
- Added the ability to get the pod name from cgroup with kubectl in bare-metal deployments. (#7416) by WoozyMasta
- Added the ability to connect to non-admin user IDs for a Ceph storage cluster. (#8276) by paulmezz
- Added connections (backend) usage to Postgres monitoring. (#8126) by ManuelPombo
- eBPF: Added support for additional Linux kernels found in Debian 10.2 and Ubuntu 18.04. (#8192) by thiagoftsm
Packaging/installation
- Added missing override for Ubuntu Eoan. (#8547) by prologic
- Added Docker build arguments to pass extra options to Netdata installer. (#8472) by Ferroin
- Added deferred error message handling to the installer. (#8381) by Ferroin
- Fixed cosmetic error checking for CentOS 8 version in install-required-packages.sh. (#8339) by prologic
- Added various fixes and improvements to the installers. (#8315) by Ferroin
- Migrated to installing only Python 3 packages during installation. (#8318) by Ferroin
- Improved support for RHEL by not installing the CUPS plugin when v1.7 of CUPS cannot be installed. (#7216) by kevenwyld
- Added support for Clear Linux in install-required-packages.sh. (#8154) by Ferroin
- Removed Fedora 29 from CI and packaging. (#8100) by Ferroin
- Removed Ubuntu 19.04 from CI and packaging. (#8040) by Ferroin
- Removed OpenSUSE Leap 15.0 from CI. (#7990) by Ferroin
Exporting
- Added a MongoDB connector to the exporting engine. (#8416) by vlvkobal
- Added a Prometheus Remote Write connector to the exporting engine. (#8292) by vlvkobal
- Added an AWS Kinesis connector to the exporting engine. (#8145) by vlvkobal
Documentation
- Fixed typo in main README.md. (#8547) by bceylan
- Updated the update instructions with per-method details. (#8394) by joelhans
- Updated paragraph on install-required-packages.sh. (#8347) by prologic
- Added Patti's dashboard video to the documentation. (#8385) by joelhans
- Fixed go.d modules in the COLLECTORS.md. (#8380) by ilyam8
- Added frontmatter to all documentation in bulk. (#8354) and (#8372) by joelhans
- Fixed MDX parsing in installation guide. (#8362) by joelhans
- Fixed typo in eBPF documentation. (#8360) by ilyam8
- Fixed links in packaging/installer to work on GitHub and docs. (#8319) by joelhans
- Fixed typo in main README.md. (#8335) by peroxy
- Removed mention saying that .deb packages are experimental. (#8250) by toadjaune
- Added standards for abbreviations/acronyms to docs style guide. (#8313) by joelhans
- Tweaked eBPF documentation, and added performance data. (#8261) by joelhans
- Added requirements for the exim collector. (#8096) by petarkozic
- Fixed misspelling of openSUSE and SUSE. (#8233) by m-rey
- Added OpenGraph tags to documentation pages. (#8224) by joelhans
- Fixed typo in custom dashboard documentation. (#8213) by shortpatti
- Removed extra asterisks in main README. (#8193) by grinapo
- Added eBPF README to documentation navigation and improved page title. (#8191) by joelhans
- Fixed figure+image without closing tag in new documentation. (#8177) by joelhans
- Corrected instructions for running Netdata behind Apache. (#8169) by cakrit
- Added PR title guidelines to the contribution guidelines to make CHANGELOG.md more meaningful. (#8150) by cakrit
- Fixed formatting in Custom dashboards documentation. (#8102) by YorikSar
- Updated the manual install documentation with better information about CentOS 6. (#8088) by Ferroin
- Added tutorials to support v1.20 release (#7943) by joelhans
CI/CD
- Added logic to bail early on LWS build if cmake is not present. (#8559) by Ferroin
- Added python.d configuration files to YAML linting CI process and increased line limit to 120 characters. (#8541) and (#8542) by ilyam8
- Cleaned up GitHub Actions workflows. (#8383) by Ferroin
- Migrated tests from Travis CI to Github Workflows. (#8331) by prologic
- Covered install-required-packages.sh with Coverity scan. (#8388) by prologic
- Added support for cross-host docker-compose builds. (#7754) by amoss
- Reconfigured Travis CI to retry transient failures on lifecycle tests. (#8203) by prologic
- Switched to checkout@v2 in GitHub Actions. (#8170) by ilyam8
Other
Netdata Cloud
- Fixed compiler warnings in the claiming code. (#8567) by vlvkobal
- Fixed regressions in cloud functionality (build, CI, claiming). (#8568) by underhood
- Switched over to soft feature flag. (#8545) by amoss
- Improved claiming behavior to run as the netdata user by default, or override if necessary. (#8516) by amoss
- Updated the info endpoint for Cloud notifications. (#8519) by amoss
- Added correct error logging for ACLK challenge/response. (#8538) by stelfrag
- Cleaned up Cloud configuration files to move [agent_cloud_link] settings to [cloud]. (#8501) by underhood
- Enhanced ACLK header payload to include timestamp-offset-usec. (#8499) by stelfrag
- Added ACLK build failures to anonymous statistics. (#8429) by underhood
- Added ACLK connection failures to anonymous statistics. (#8456) by underhood
- Added HTTP proxy support to ACLK. (#8406)/(#8418) by underhood
- Improved ownership of the claim.d directory. (#8475) by amoss
- Fixed the ACLK response payload to match the new specification. (#8420) by stelfrag
- Added the new cloud info in the info endpoint. (#8430) by amoss
- Implemented ACLK Last Will and Testament. (#8410) by stelfrag
- Fixed JSON parsing in ACLK. (#8426) by stelfrag
- Fixed outstanding problems in claiming and add SOCKS5 support. (#8406)/(#8404) by amoss and underhood
- Fixed the type value for alarm updates in the ACLK. (#8403) by stelfrag
- Improved performance of ACLK. (#8399)/(#8401) by amoss
- Improved the ACLK's agent "pop-corning" phase. (#8398) by stelfrag
- Improved ACLK according to results of the smoke-test. (#8358) by amoss and underhood
- Added code to bundle LWS in binary packages. (#8255) by Ferroin
- Added libwebsockets files to make dist. (#8275) by Ferroin
- Adapted the claiming script to new API responses. (#8245) by hmoragrega
- Fixed claiming script to reflect Netdata Cloud API changes. (#8220) by cosmix
- Added libwebsockets bundling code to netdata-installer.sh. (#8144) by Ferroin
Bug fixes
- Removed notifications from the dashboard and fixed the /default.html route. (#8599) by jacekkolasa
- Fixed help-tooltips styling, private registry node deletion, and the right-hand sidebar "jumping" on document clicks. (#8553) by jacekkolasa
- Fixed errors reported by Coverity. (#8593) by thiagoftsm, (#8579) by amoss, and (#8586) by thiagoftsm
- Added netdata.service.* to .gitignore to hide the system/netdata.service.v235 file. (#8556) by vlvkobal
- Fixed Debian 8 (Jessie) support. (#8590) and (#8593) by prologic
- Fixed broken Fedora 30/31 RPM builds. (#8572) by prologic
- Fixed broken pipe ignoring in apps.plugin. (#8554) by vlvkobal
- Fixed the bytespersec chart context in the Python Apache collector. (#8550) by ilyam8
- Fixed charts.d.plugin to exit properly during Netdata service restart. (#8529) by ilyam8
- Fixed minimist dependency vulnerability. (#8537) by jacekkolasa
- Fixed our Debian/Ubuntu packages to package the expected systemd unit files. (#8468) by prologic
- Fixed auto-updates for static (kickstart-static64.sh) installs. (#8507) by prologic
- Fixed openSUSE 15.1 RPM package builds. (#8494) by prologic
- Fixed how SimpleService truncates Python module names. (#8492) by ilyam8
- Removed erroneous \n in uninstaller output. (#8446) by prologic
- Fixed install-required-packages script to self-update apt. (#8491) by prologic
- Added proper prefix to Python module names during loading. (#8474) by ilyam8
- Fixed how the Netdata updater script cleans up after being run. (#8414) by prologic
- Fixed the flushing error threshold with the database engine. (#8425) by mfundul
- Fixed memory leak for host labels streaming from slaves to master. (#8460) by thiagoftsm
- Fixed support for uninstalling the eBPF collector in the uninstaller. (#8444) by prologic
- Fixed a bug involving stop_all_netdata and uv_pipe_connect() in the installer. (#8444) by prologic
- Fixed installer output regarding newlines. (#8447) by prologic
- Fixed broken dependencies for Ubuntu 19.10. (#8397) by prologic
- Fixed streaming scaling. (#8375) by mfundul
- Fixed missing characters in kernel version field by encoding slave fields. (#8216) by thiagoftsm
- Fixed installation for Ubuntu 14.04 (#7690) by Ehekatl
- Fixed dependencies for Debian Jessie. (#8290) by Ferroin
- Fixed dependency names for Arch Linux. (#8334) by Ferroin
- Removed extra printed \n in various installers. (#8324)/(#8325)/(#8326) by Jiab77
- Fixed missing folders in /var/ for .deb packages. (#8314) by SamK
- Fixed Ceph collector to get osd_perf_infos in versions 14.2 and higher. (#8248) by ilyam8
- Fixed RHEL / CentOS 8.x dependencies for Judy-devel and others.(#8202) by prologic
- Removed extraneous commas from chart information in dashboard. (#8266) by FlyingSixtySix
- Removed tmem collection from xenstat_plugin to allow Netdata on Xen 4.13 to compile successfully. (#7951) by rushikeshjadhav
- Fixed get_latest_version for nightly channel update script. (#8172) by ilyam8
- Restricted messages to Google Analytics. (#8161) by thiagoftsm
- Fixed Python 3 dict access in OpenLDAP collector module. (#8162) by Mic92
netdatabot released this
Netdata v1.19.0
Release v1.19.0 contains 2 new collectors, 19 bug fixes, 17 improvements, and 19 documentation updates.
At a glance
We completed a major rewrite of our web log collector to dramatically improve its flexibility and performance. The new collector, written entirely in Go, can parse and chart logs from Nginx and Apache servers, and combines numerous improvements. Netdata now supports the LTSV log format, creates charts for TLS and cipher usage, and is amazingly fast. In a test using SSD storage, the collector parsed the logs for 200,000 requests in about 200ms, using 30% of a single core.
This Go-based collector also has powerful custom log parsing capabilities, which means we're one step closer to a generic application log parser for Netdata. We're continuing to work on this parser to support more application log formatting in the future.
We have a new tutorial on enabling the Go web log collector and using it with Nginx and/or Apache access logs with minimal configuration. Thanks to Wing924 for starting the Go rewrite!
We introduced more cmocka unit testing to Netdata. In this release, we're testing how Netdata's internal web server processes HTTP requests—the first step to improve the quality of code throughout, reduce bugs, and make refactoring easier. We wanted to validate the web server's behavior but needed to build a layer of parametric testing on top of the CMocka test runner. Read all about our process of testing and selecting cmocka on our blog post: Building an agile team's 'safety harness' with cmocka and FOSS.
Netdata's Unbound collector was also completely rewritten in Go to improve how it collects and displays metrics. This new version can get dozens of metrics, including details on queries, cache, uptime, and even show per-thread metrics. See our tutorial on enabling the new collector via Netdata's amazing auto-detection feature.
We fixed an error where invalid spikes appeared on certain charts by improving the incremental counter reset/wraparound detection algorithm.
Netdata can now send health alarm notifications to IRC channels thanks to Strykar!
And, Netdata can now monitor AM2320 sensors, thanks to hard work from Tom Buck.
Acknowledgements
Our thanks go to:
- andyundso for fixing the packagecloud binary installation in Debian 8.
- Strykar for adding support IRC health notifications.
- tommybuck for the new AM2320 sensors collector.
- Saruspete for the new ability to provide metrics on fragmentation of free memory pages.
- OdysLam for improving the documentation for new collector plugins.
- k0ste, xginn8 and nodiscc for improving the configuration of the apps plugin.
- amichelic for improving the web_log collector.
- cherouvim, arkamar, half-duplex and CtrlAltDel64 for improving the documentation.
- mniestroj for the fix to the dbengine compilation with musl standard C.
- arkamar for an improvement to the xenstat collector.
- vakartel for improving the cgroup network interfaces detection in Proxmox 6.
Improvements
New Collectors
- AM2320 sensor collector plugin #7024 (tommybuck)
- Added parsing of /proc/pagetypeinfo to provide metrics on fragmentation of free memory pages. #6843 (Saruspete)
- The unbound collector module was completely rewritten, in Go go.d.plugin/#287 (ilyam8)
Collector improvements
- We rewrote our web log parser in Go, drastically improving its flexibility and performance. go.d.plugin/#141 (ilyam8)
- The Kubernetes kubelet collector now reads the service account token and uses it for authorization. We also added a new default job to collect metrics from. go.d.plugin/#285
- Added a new default job to the Kubernetes coredns collector to collect metrics from. go.d.plugin/#285
- apps.plugin: Synced FRRouting daemons configuration with the frr 7.2 release. #7333 (k0ste)
- apps.plugin: Added process group for git-related processes. #7289 (nodiscc)
- apps.plugin: Added balena to the container-engines application group. #7287 (xginn8)
- web_log: Treat 401 Unauthorized requests as successful. #7256 (amichelic)
- xenstat.plugin: Prepare for xen 4.13 by checking for xenstat_vbd_error presence. #7103 (arkamar)
- mysql: Added galera cluster_status alarm. #6989 (ilyam8)
Metrics Database
Health
- Fine tune various default alarm configurations. #7322 (Ferroin)
- Update SYN cookie alarm to be less aggressive. #7250 (Ferroin)
- Added support for IRC alarm notifications #7148 (Strykar)
Installation/Packages
- Corrected the Makefile.am files indentation, to prevent unexpected errors. #7252 (knatsakis)
- Rationalized ownership and permissions of /etc/netdata. #7244 (knatsakis)
- Made various improvements to the installer script netdata-installer.sh. #7200 (knatsakis)
- Include go.d.plugin version v0.11.0 #7365 (ilyam8)
Documentation
- Correct versions of FreeNAS that Netdata is available on. #7355 (knatsakis)
- Update plugins.d/README.md. #7335 (OdysLam)
- Note regarding stable vs nightly was accidentally being shown as a code fragment in the installation documentation. #7330 (cakrit)
- Properly link to translated documents from netdata-security.md. #7343 (cakrit)
- Update documentation of the netdata-updater to properly cover `kickstart-static64.sh` and `kickstart.sh` installations. #7262 (knatsakis)
- Converted the swagger documentation to OpenAPI3.0. #7257 (amoss)
- Minor corrections to the netdata installer documentation. #7246 (paulkatsoulakis)
- Fix typo in collectors README. #7242 (cherouvim)
- Clarified database engine/RAM in getting started guide. #7225 (joelhans)
- Suggest using `/var/run/netdata` for the unix socket in the running behind nginx documentation. #7206 (CtrlAltDel64)
- Added GA links to new documents. #7194 (joelhans)
- Added a page for metrics archiving to TimescaleDB. #7180 (joelhans)
- Fixed typo in the `contrib/debian` descriptions for `cupsd`. #7154 (arkamar)
- Added user information to MySQL Python module documentation. #7128 (prhomhyse)
- Document the results of the spike investigation into CMake. #7114 (amoss)
- Fix to docker-compose+Caddy installation. #7088 (joelhans)
- Fixed broken links and added setup instructions for Telegram health notifications. #7033 (half-duplex)
- Minor grammar change in /web/gui documentation #7363 (eviemsrs)
Other
- Improve Travis build warnings (issue #7189). #7312 (amoss)
- cmocka testing for http requests #7308, #7264, #7210 (amoss and vlvkobal)
- CI/CD: Prevented nightly jobs from timing out #7238, #7214 (knatsakis)
Bug fixes
- Fixed packagecloud binary installation in Debian 8. #7342 (andyundso)
- Fixed missing libraries in certain compilations, by adding a missing trailing backslash to `Makefile.am`. #7326 (oxplot)
- Prevented freezes due to isolated CPUs. #7318 (stelfrag)
- Fixed missing streaming when slave has SSL activated. #7306 (thiagoftsm)
- Fixed error 421 in IRC notifications, by removing a line break from the message. #7243 (thiagoftsm)
- `/proc/pagetypeinfo` collection could under particular circumstances cause high CPU load. As a workaround, we disabled `pagetypeinfo` by default. #7230 (vlvkobal)
- Fixed incorrect memory allocation in the `proc` plugin's `pagetypeinfo` collector. #7187 (thiagoftsm)
- Eliminated cached responses from the postgres collector. #7228 (ilyam8)
- rabbitmq: Fixed `"disk_free": "disk_free_monitoring_disabled"` error. #7226 (ilyam8)
- Fixed build with the musl standard C library by including `limits.h` before using `LONG_MAX`. #7224 (mniestroj)
- Fixed Apache module not working with a Let's Encrypt certificate by allowing the python `UrlService` to skip `tls_verify` for the http scheme. #7223 (ilyam8)
- Fixed invalid spikes appearing in certain charts, by improving the incremental counter reset/wraparound detection algorithm. #7220 (mfundul)
- Fixed DNS-lookup performance issue on FreeBSD. #7132 (amoss)
- Fixed handling of the `stable` option, so that the installers and automatic updater respect it. #7083 (knatsakis), #7051 (oxplot)
- Fixed the static binary installer's handling of the `--auto-update` option. #7076 (knatsakis)
- Fixed cgroup network interfaces classification on Proxmox 6. #7037 (vakartel)
- Added missing dbengine flags to the installer. #7027 (paulkatsoulakis)
- Fixed issue with unknown variables in alarm configuration expressions always being evaluated to zero. #6984 (thiagoftsm)
- Fixed issue of automatically picking up Pi-hole stats from a Pi-hole instance installed on another device, by disabling the corresponding default collection job. go.d.plugin#289 (ilyam8)
Assets
5
netdatabot released this
Netdata v1.18.1
Release v1.18.1 contains 17 bug fixes, 5 improvements, and 5 documentation updates.
At a glance
Patch release 1.18.1 contains several bug fixes, mainly related to FreeBSD and the binary package generation process.
Netdata can now send notifications to Google Hangouts Chat!
On certain systems, the `slabinfo` plugin introduced in v1.18.0 added thousands of new metrics. We decided the collector's usefulness to most users didn't justify the increase in resource requirements. This release disables the collector by default.
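If those metrics matter to you, you can turn the plugin back on yourself. A minimal sketch, assuming a stock `/etc/netdata` layout and that the option is named after the plugin in the `[plugins]` section:

```sh
# Illustrative only: re-enable the slabinfo plugin that is now off by default.
cd /etc/netdata
sudo ./edit-config netdata.conf
# then, under [plugins], set:
#   slabinfo = yes
sudo systemctl restart netdata
```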
Finally, we added a chart under Netdata Monitoring to present a better view of the RAM used by the database engine (dbengine). The chart doesn't currently take into consideration the RAM used for slave nodes, so we intend to add more related charts in the future.
Acknowledgements
We'd like to thank:
- hendrikhofstadt for the Google Hangouts notifications
- stevenh for the awesome zombie process reaper and the fix for the freeipmi collector
- samm-git for the addition of the VMware VMXNET3 driver to the default interfaces list for FreeBSD
- sz4bi for a documentation fix
Improvements
- Disable the `slabinfo` plugin by default to reduce the total number of metrics collected #7056 (vlvkobal)
- Add dbengine RAM usage statistics #7038 (mfundul)
- Support Google Hangouts chat notifications #7013 (hendrikhofstadt)
- Add CMocka unit tests #6985 (vlvkobal)
- Add prerequisites to enable automatic updates for installations via the static binary (`kickstart-static64.sh`) #7060 (knatsakis)
Documentation
- Fix typo in health_alarm_notify.conf #7062 (sz4bi)
- Fix BSD/pfSense documentation #7041 (thiagoftsm)
- Document the structure of the `api/v1/data` API responses. #7012 (amoss)
- Tutorials to support v1.18 features #6993 (joelhans)
- Fix broken links in docs #7123 (joelhans)
Bug fixes
- Fix unbound collector timings: Convert recursion timings to milliseconds. #7121 (Ferroin)
- Fix unbound collector unhandled exceptions #7112 (ilyam8)
- Fix upgrade path from v1.17.1 to v1.18.x for deb packages #7118 (knatsakis)
- Fix CPU charts in apps plugin on FreeBSD #7115 (vlvkobal)
- Fix megacli collector binary search and sudo check #7108 (ilyam8)
- Fix missing packages, by running the triggers for DEB and RPM package build in separate stages #7105 (knatsakis)
- Fix segmentation fault in FreeBSD when statsd is disabled #7102 (vlvkobal)
- Fix Clang warnings #7090 (thiagoftsm)
- Fix python.d error logging: change chart suppress msg level from ERROR to INFO #7085 (ilyam8)
- Fix freeipmi update frequency check: was warning that 5 was too frequent and it was setting it to 5. #7078 (stevenh)
- Fix alarm configurations not getting loaded, via better handling of chart names with special characters #7069 (thiagoftsm)
- Fix dbengine not working when `mmap` fails, mostly with BSD kernels #7065 (mfundul)
- Fix FreeBSD issue due to incorrect size of a zeroed block #7061 (vlvkobal)
- Don't write HTTP response 204 messages to the logs #7035 (vlvkobal)
- Fix build when CMocka isn't installed #7129 (vlvkobal)
- FreeBSD plugin: Add VMware VMXNET3 driver to the default interfaces list #7109 (samm-git)
- Prevent zombie processes when a child is re-parented to netdata while it's running in a container, by adding a child process reaper #7059 (stevenh)
Assets
5
netdatabot released this
Netdata v1.18.0
Release v1.18.0 contains 5 new collectors, 19 bug fixes, 28 improvements, and 20 documentation updates.
At a glance
The database engine is now the default method of storing metrics in Netdata. You immediately get more efficient and configurable long-term metrics storage without any work on your part. By saving recent metrics in RAM and "spilling" historical metrics to disk for long-term storage, the database engine is laying the foundation for many more improvements to distributed metrics.
We even have a tutorial on switching to the database engine and getting the most from it. Or, just read up on how performant the database engine really is.
Both our `python.d` and `go.d` plugins now have more intelligent auto-detection: they periodically dump a list of active modules to disk. When Netdata starts, such as after a reboot, the plugins use this list of known services to re-establish metrics collection much more reliably. No more worrying if the service or application you need to monitor starts up minutes after Netdata.
Two of our new collectors will help those with Hadoop big data infrastructures. The HDFS and Zookeeper collection modules come with essential alarms requested by our community and Netdata's auto-detection capabilities to keep the required configuration to an absolute minimum. Read up on the process via our HDFS and Zookeeper tutorial.
Speaking of new collectors—we also added the ability to collect metrics from SLAB cache, Gearman, and vCenter Server Appliances.
Before v1.18, if you wanted to create alarms for each dimension in a single chart, you needed to write separate entities for each dimension, which was neither efficient nor user-friendly. New dimension templates fix that hassle: a single entity can now automatically generate alarms for any number of dimensions in a chart, even those you weren't aware of! Our tutorial on dimension templates has all the details.
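As a rough illustration of what a dimension template looks like, the entity below uses a `foreach` pattern in its lookup line to cover every dimension of a chart; the chart name and thresholds are made up for the example:

```sh
# Illustrative sketch only: a health entity that generates one alarm per dimension.
cd /etc/netdata
sudo ./edit-config health.d/example.conf
#     alarm: example_dimension_alarm
#        on: example.chart
#    lookup: average -10s foreach *
#     every: 10s
#      warn: $this > 50
#      crit: $this > 100
```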
v1.18 brings support for installing Netdata on offline or air-gapped systems. To help users comply with strict security policies, our installation scripts can now install Netdata using previously downloaded tarballs and checksums instead of downloading them at runtime. We have guides for installing offline via `kickstart.sh` or `kickstart-static64.sh` in our installation documentation. We're excited to bring real-time monitoring to once-inaccessible systems!
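Roughly, an offline installation looks like the sketch below: download the artifacts on a connected machine, copy them over, then point the installer at the local files. The flag name and the exact list of files come from the offline-install guide and may differ between versions, so treat them as assumptions:

```sh
# Illustrative only: run the one-line installer against previously downloaded files
# instead of letting it download them at runtime.
./kickstart.sh --local-files /tmp/netdata-latest.tar.gz /tmp/sha256sums.txt
```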
Acknowledgements
Our thanks go to:
- Saruspete for several contributions, including the new `slabinfo` collector, which monitors SLAB cache mechanism metrics.
- agronick for the new Gearman worker statistics collector
- OneCodeMonkey for a bug fix in the alarm notification script.
- lets00 for providing a Portuguese (Brazil) translation of the installation instructions
- mbarper and davent for improvements to the uninstaller.
- n0coast for a documentation fix.
Improvements
Database engine
- Make dbengine the default memory mode #6977 (mfundul)
- Increase dbengine default cache size #6997 (mfundul)
- Reduce overhead during write IO #6964 (mfundul)
- Detect deadlock in dbengine page cache #6911 (mfundul)
- Remove hard cap from page cache size to eliminate deadlocks. #7006 (mfundul)
New Collectors
- SLAB cache mechanism (Saruspete)
- Gearman worker statistics
- vCenter Server Appliance
- Zookeeper servers
- Hadoop Distributed File System (HDFS) nodes
Collector improvements
- rabbitmq: Add vhosts message metrics from `/api/vhosts` #6976 (ilyam8)
- elasticsearch: collect metrics from _cat/indices #6965 (ilyam8)
- mysql: collect galera cluster metrics #6962 (ilyam8)
- Allow configuration of the python.d launch command from netdata.conf #6781 (amoss)
- x509check: smtp cert check support (netdata/go.d.plugin#261)
- dnsmasq_dhcp: respect conf-dir,conf-file,dhcp-host options (netdata/go.d.plugin#268)
- plugin: respect previously running jobs after plugin restart (#6499)
- httpcheck: add current state duration chart (netdata/go.d.plugin#270 )
- springboot2: fix context (netdata/go.d.plugin#263)
Health
- Enable alarm templates for chart dimensions #6560 (thiagoftsm)
- Center the chart on the proper chart and time whenever an alarm link is clicked #6391 (thiagoftsm)
Installation/Packages
- netdata/installer: Add support for offline installations using `kickstart.sh` or `kickstart-static64.sh` #6693 (paulkatsoulakis)
- Allow netdata service installation, when docker runs systemd #6987 (paulkatsoulakis)
- Make spec file more consistent with version dependencies #6948 (paulkatsoulakis)
- Fix broken links on web files, for DEB #6930 (paulkatsoulakis)
- Introduce separate CUPS package for DEB #6724 and RPM #6700 distributions. (paulkatsoulakis). Do not build CUPS plugin subpackage on CentOS 6 and CentOS 7 #6926 (knatsakis)
- Various Improvements in the package release CI/CD flow #6914 #6905 #6842 #6837 #6838 #6834 (paulkatsoulakis), #6900 (cakrit)
- Remove RHEL7 - i386 binary distribution, until bug #6849 is resolved #6902 (paulkatsoulakis)
- Bring on board two scripts that build `libuv` and `judy` from source #6850 (paulkatsoulakis)
Documentation
- Add Portuguese (Brazil) translation of the installation instructions #16 (lets00), #7004 (cakrit)
- Fix broken links found via linkchecker #6983 (joelhans)
- Clarification on configuring notification recipients #6961 (cakrit)
- Fix Remark Lint for READMEs in database #6942, contrib #6921, daemon README #6920 and backends #6917 (prhomhyse)
- Suggest using /run or /var/run for the unix socket #6916 (cakrit)
- Improve documentation for the SNMP collector #6915 (cakrit)
- Update docs for offline install #6884 (paulkatsoulakis)
- Remove Dollar sign from Bash code in documentation and fix remark-lint warnings #6880 (prhomhyse)
- Markdown syntax fixes for MDX parser #6877 (joelhans)
- Update python.d module checklist to match the current paths and build system. #6874 (Ferroin)
- Add instructions for simple SMTP transport #6870 (cakrit)
- Add example for prometheus archiving source parameter #6869 (cakrit)
- Fix broken links in the standard web dashboard doc #6854 (prhomhyse)
- Overhaul of Getting started guide #6811 (joelhans)
- NPM Packages version update #6801 (prhomhyse)
- Update suggested `grep` command in “high performance netdata” to be more specific #6794 (n0coast)
Other
- API: Include `family` in the `allmetrics` JSON response #6966 (ilyam8)
- API: Add fixed width option to badges #6903 (underhood)
- Allow hostnames in Access Control Lists #6796 (amoss)
- Functional test improvements for web and alarms tests #6783 (thiagoftsm)
Bug fixes
- Fix error in alarm notification script, when executed without any arguments #7003 (OneCodeMonkey)
- Fix Coverity warnings #6992 #6970 #6941 #6797 (thiagoftsm), #6909 (cakrit)
- Fix dbengine consistency when a writer modifies a page concurrently with a reader querying its metrics #6979 (mfundul)
- Fix memory leak on netdata exit #6945 (vlvkobal)
- Fix for missing boundary data points in certain cases #6938 (mfundul)
- Fix `unhandled exception` log warnings in the `python.d` collector orchestrator `start_job` #6928 (ilyam8)
- Fix CORS errors when accessing the health management API, by permitting `x-auth-token` in `Access-Control-Allow-Headers` #6894 (cakrit)
- Fix misleading error log entries `RRDSET: chart name 'XXX' on host 'YYY' already exists`, by changing the log level for chart updates #6887 (vlvkobal)
- Properly resolve all Kubernetes container names #6885 (cakrit)
- Fix LGTM warnings #6875 (jacekkolasa)
- Fix agent UI redirect loop during cloud sign-in #6868 (jacekkolasa)
- Fix `/var/lib/netdata/registry` getting left behind after uninstall #6867 (davent)
- Fix python.d.plugin bug in parsing configuration files with no explicitly defined jobs #6856 (ilyam8)
- Fix potential buffer overflow in the web server #6817 (amoss)
- Fix netdata group deletion on linux for uninstall script #6645 (mbarper)
- Various `cppcheck` fixes #6386 (ac000)
- Fix crash on FreeBSD due to do_dev_cpu_temperature stack corruption #7014 (samm-git)
- Fix handling of illegal metric timestamps in database engine #7008 (mfundul)
- Fix a resource leak #7007 (vlvkobal)
- Fix rabbitmq collector error when no vhosts are available. #7018 (mfundul)
Assets
5
netdatabot released this
Netdata v1.17.1
Release v1.17.1 contains 2 bug fixes, 6 improvements, and 2 documentation updates.
At a glance
The main reason for the patch release is an essential fix to the repeating alarm notifications we introduced in v1.17.0. If you enabled repeating notifications, Netdata would not then send CLEAR notifications for the selected alarms.
The release also includes a significant improvement to Netdata's auto-detection capabilities, especially after a system restart. Netdata now remembers which `python.d` plugin jobs were successfully collecting data the last time it was running, and retries those jobs for 5 minutes before giving up. As a result, you no longer have to worry if your system starts Netdata before the monitored services have had a chance to start properly. We will complete the same improvement for `go.d` plugins in v1.18.0.
We also made some improvements to our binary packages and added a neat sample custom dashboard that can show charts from multiple Netdata agents.
Acknowledgements
Our thanks go to:
- tnyeanderson for `Dash.html`, the custom dashboard that can show charts from multiple hosts.
- qingkunl for improving the charts auto-scaling feature with nanosec and num units.
- Fohdeesha for documentation improvements
- Saruspete for improving debugging capabilities with tags for threads and his significant involvement in many other issues
Improvements
Binary packages
- netdata/packaging: Trigger stable package generation upon release process #6766 (paulkatsoulakis)
- netdata/packaging: Fix ubuntu/xenial runtime dependencies #6825 (paulkatsoulakis)
- netdata/packaging: Remove fedora/28, which is no longer available #6808 (paulkatsoulakis)
- netdata/packaging: Override control file for debian/buster #6777 (paulkatsoulakis)
GUI
- Expand dashboard auto-scaling and convertible units. Added two more units that allow auto-scaling and conversion: nanoseconds and num. #5920 (qingkunl)
Collector improvements
Documentation
- Fix pfsense instructions and links #6768 (Fohdeesha)
- Add high level explanation of dashboard contents #6648 (joelhans)
Other
- Update cache hashes for js and css #6756 (jacekkolasa)
- Provide a tag to identify the thread in the error messages. #6745 (Saruspete)
- Add sample multi-server dashboard `dash.html` #6603 (tnyeanderson)
- Replace hard-coded HTTP response codes #6595 (thiagoftsm)
Bug fixes
- Fix clear notifications for repeating alarms #6638 (thiagoftsm)
- Stop `configure.ac` from linking against dbengine and https libraries when dbengine or https are disabled #6658 (mfundul)
Assets
5
netdatabot released this
Release v1.17.0 contains 38 bug fixes, 33 improvements, and 20 documentation updates.
At a glance
You can now change the data collection frequency at will, without losing previously collected values. A major improvement to the new database engine allows you not only to store metrics at variable granularity, but also to autoscale the time axis of the charts, depending on the data collection frequencies used during the presented time.
You can also now monitor VM performance from one or more vCenter servers with a new VSphere collector. In addition, the `proc` plugin now also collects ZRAM device performance metrics and the `apps` plugin monitors process uptime for the defined process groups.
Continuing our efforts to integrate with as many existing solutions as possible, you can now directly archive metrics from Netdata to MongoDB via a new backend.
Netdata badges now support international (UTF8) characters! We also made our URL parser smarter, not only for international character support, but also for other strange API queries.
We also added `.DEB` packages to our binary distribution repositories at Packagecloud, a new collector for Linux zram device metrics, and support for plain text email notifications.
This release includes several fixes and improvements to the TLS encryption feature we introduced in v1.16.0. First, encrypted slave-to-master streaming connections weren't working as intended. And second, our community helped us discover cases where HTTP requests were not correctly redirected to HTTPS with TLS enabled. This release mitigates those issues and improves TLS support overall.
Finally, we improved the way Netdata displays charts with no metrics. By default, Netdata displays charts for disks, memory, and networks only when the associated metrics are not zero. Users could enable these charts permanently using the corresponding configuration options, but they would need to change more than 200 options. With this new improvement, users can enable all charts with zero values using a single, global configuration parameter.
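For example, instead of flipping hundreds of per-chart options, you would now toggle a single switch in `netdata.conf`. The option name below is hypothetical and only illustrates the idea; check your version's documentation for the real key:

```sh
# Illustrative only: a single global switch instead of ~200 per-chart options.
cd /etc/netdata
sudo ./edit-config netdata.conf
# [global]
#     enable zero metrics = yes   # hypothetical option name, for illustration only
sudo systemctl restart netdata
```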
Acknowledgements
Our thanks go to:
- Steve8291 for all his help across the board!
- alpes214 for improvements in health monitoring
- fun04wr0ng for fixing a bug in the `nfacct` plugin
- RaZeR-RBI for the ZRAM collector module
- underhood for the UTF-8 parsing fixes in badges, that gave us support for internationalized badges
- Ferroin for improving the python.d collectors' handling of disconnected sockets
- dex4er for improving our OS detection code
- knatsakis for his help in our CI/CD pipeline
- sunflowerbofh for `.gitignore` fixes
- Cat7373 for fixing some issues with the `spigotmc` collector
Improvements
Database engine
- Variable granularity support for data collection #6430 (mfundul)
- Added tips on the UI to encourage users to try the new DB Engine, when they reach the end of their metrics history #6711 (jacekkolasa)
Binary packages
- Added nightly generation of RPM/DEB amd64 packages #6675 (paulkatsoulakis)
- Provided built-in support for the prometheus remote write API in our packages #6480 (paulkatsoulakis)
- Documented distribution support matrix and functionality availability #6552 (paulkatsoulakis)
Health
- Added support for plain text only email notifications #6485 (leo-lb)
- Started showing “hidden” alarm variables in the responses of the `chart` and `data` API calls (#6054) #6615 (alpes214)
- Added a new API call for alarm status counters, as a first step towards badges that will show the total number of alarms #6554 (alpes214)
Security
- Added configurable default locations for trusted CA certificates #6549 (thiagoftsm)
- Added safer way to get container names #6441 (ViViDboarder)
- Added SSL connection support to the python mongodb collector #6546 (ilyam8)
New collectors
- VSphere collector go.d.plugin PR241 #6572 (ilyam8)
Collector improvements
- rethinkdb collector new driver support #6431 (ilyam8)
- The apps plugin now displays process uptime charts #6654 (vlvkobal)
- Added ZRAM device metrics to the `proc.plugin` #6276 #6424 (RaZeR-RBI)
Archiving
Documentation
- Add a statement about permissions for the diskspace plugin #6474 (vlvkobal)
- Improved the running behind Nginx guide #6466 (prhomhyse)
- Add more supported backends to the documentation #6443 (vlvkobal)
- Removed Ventureer from the list of demo sites #6442 (paulkatsoulakis)
- Updated docs health monitoring and health management api documentation #6435 (jghaanstra)
- Fixed issues in HTML docs generation, causing the hyperlink checks to function improperly #6433 (cakrit)
- New 'homepage' for documentation site #6428 (joelhans)
- Styling improvements to documentation #6425 (joelhans)
- Add documentation for binary packages, plus draft table for distributions support #6422 (paulkatsoulakis)
- Update netdata installation dependencies #6421 (paulkatsoulakis)
- Added better explanation of nightly and stable releases #6388 (joelhans)
- Add netdata haproxy documentation page #6454 (johnramsden)
- Added Netdata Cloud documentation #6476 (joelhans)
- Removed text about nightly version #6534 (joelhans)
- Provided documentation style guide & build instructions #6563 (joelhans)
- Install Netdata with Docker #6596 (prhomhyse)
- Fixed typos in the 'README.md' file. #6604 (coffeina)
- Change "netdata" to "Netdata" in all docs #6621 (joelhans)
- Fixed Markdown Lint warnings #6664 (prhomhyse)
- Improved Apache reverse proxy documentation on Content Security Policy #6667 (sunflowerbofh)
Other
- Updated our CLA, clarifying our intention to keep netdata FOSS #6504 (cakrit)
- Updated terms of use for U.S. legal reasons #6631 (cakrit)
- Updated logos in the infographic and remaining favicons #6417 (cakrit)
- SSL vs. TLS consistency and clarification in documentation #6414 (joelhans)
- Update Running-behind-apache.md #6406 (Steve8291)
- Fix Web API Health documentation #6404 (thiagoftsm)
- Added apps grouping debug messages #6375 (vlvkobal)
- GCC warning and linting improvements #6392 (ac000)
- Minor code readability changes #6539 (underhood)
- Added global configuration option to show charts with zero metrics #6419 (vlvkobal)
- Improved the way we parse HTTP requests, so we can avoid issues from edge cases #6247 #6714 (thiagoftsm)
- Build DEB and RPM packages in parallel #6579 (knatsakis)
- Updated package version requirements for LZ4 and libuv #6607 (mfundul)
- Improved system OS detection for RHEL6 and Mac OS X #6612 (dex4er)
- .travis.yml: Remove 'sudo: true' as it is now deprecated #6624 (knatsakis)
- Modified the documentation build process to accept <> around links in markdown #6646 (cakrit)
- Fixed spigotmc module typos in comments. #6680 (Cat7373)
Bug fixes
- Fixed the snappy library detection in some versions of OpenSuSE and CentOS #6479 (vlvkobal)
- Fixed sensor chips filtering in python sensors collector #6463 (ilyam8)
- Fixed user and group names in apps.plugin when running in a container, by mounting and reading `/etc/passwd` #6472 (vlvkobal)
- Fixed possible buffer overflow in the JSON parser used for health notification silencers #6460 (thiagoftsm)
- Fixed handling of corrupted DB files in dbengine, that could cause netdata to not start properly (CRC and I/O error handling) #6452 (mfundul)
- Stopped docs icon from linking to streaming page instead of docs root #6445 (joelhans)
- Fixed an issue with Netdata snapshots that could sometimes cause a problem during import. #6400 (jacekkolasa)
- Fixed bug that would cause netdata to attempt to kill already terminated threads again, on shutdown. #6387 (emmrk)
- Fixed out of memory (12) errors by reimplementing the myopen() function family #6339 (mfundul)
- Fixed wrong redirection of users signing in after clicking Nodes #6544 (jacekkolasa)
- Fixed python.d smartd collector increasing CPU usage #6540 (ilyam8)
- Fixed missing navigation arrow in Documentation #6533 (joelhans)
- Fixed mongodb python collector stock configuration mistake, by changing `password` to `pass` #6518 (ilyam8)
- Fixed broken left navbar links in translated docs #6505 (cakrit)
- Fixed handling of UTF8 characters in badges and added International Support to the URL parser #6426 (underhood)
- Fixed nodes menu sizing (responsive) #6455 (builat)
- Fixed issues with http redirection to https and streaming encryption #6468 (thiagoftsm)
- Fixed broken links to `arcstat.py` and `arc_summary.py` in dashboard_info.js #6461 (TheLovinator1)
- Fixed bug with the nfacct plugin that resulted in missing dimensions from the charts #6098 (fun04wr0ng)
- Stopped anonymous stats from trying to write a log under `/tmp` #6491 (cakrit)
- Fixed a problem with `edit-config`, the configuration editor, not being able to run in MacOS. We no longer deliver edit-config as part of the distribution tarball, so that it can get generated with proper configuration during installation. #6507 (paulkatsoulakis)
- Fixed issue with the netdata-updater that caused it not to run properly in static64 installations. #6520 (paulkatsoulakis)
- Fixed some yamllint errors in our Travis configuration #6526 (knatsakis)
- Properly delete obsolete dimensions for inactive disks in smartd_log #6547 (ilyam8)
- Fixed `.environment` file getting overwritten, by moving tarball checksum information into the lib dir of netdata #6555 (paulkatsoulakis)
- Fixed handling of disconnected sockets in unbound python.d collector. #6561 (Ferroin)
- Fixed crash in malloc #6583 (thiagoftsm)
- Fixed installer error `undefined reference to LZ4_compress_default` #6589 (mfundul)
- Fixed issue with mysql collector that resulted in showing only a single slave_status chart, regardless of the number of replication channels #6597 (ilyam8)
- Fixed installer issue that would automatically enable the netdata service, even, if it was previously disabled #6606 (paulkatsoulakis)
- Fixed a segmentation fault in backends #6627 (vlvkobal)
- Fixed spigotmc plugin bugs #6635 (Cat7373)
- Fixed installer error when running `kickstart.sh` as a non-privileged user #6642 (paulkatsoulakis)
- Fixed issue causing OpenSSL libraries to not be found on gentoo #6670 (paulkatsoulakis)
- Fixed dbengine 100% CPU usage due to corrupted transaction payload handling #6731 (mfundul)
- Fixed wrong default paths in certain installations #6678 (paulkatsoulakis)
- Fixed exact path to netdata.conf in .gitignore #6709 (sunflowerbofh)
- Fixed static64 installer bug that resulted in always overwriting configuration #6710 (paulkatsoulakis)
Thanks to the community for their help!
Assets
5
netdatabot released this
Release v1.16.0 contains 40 bug fixes, 31 improvements and 20 documentation updates
At a glance
Binary distributions. To improve the security, speed and reliability of new netdata installations, we are delivering our own, industry standard installation method, with binary package distributions. The RPM binaries for the most common OSs are already available on packagecloud and we’ll have the DEB ones available very soon. All distributions are considered in Beta and, as always, we depend on our amazing community for feedback on improvements.
- Our stable distributions are at netdata/netdata @ packagecloud.io
- The nightly builds are at netdata/netdata-edge @ packagecloud.io
Netdata now supports SSL encryption! You can secure the communication to the web server, the streaming connections from slaves to the master and the connection to an openTSDB backend.
This version also brings two long-awaited features to netdata’s health monitoring:
- The health management API introduced in v1.12 allowed you to easily disable alarms and/or notifications while netdata was running. However, those changes were not persisted across netdata restarts. Since routine maintenance may involve completely restarting a monitoring node, netdata now saves these configurations to disk every time you issue a command to change the silencer settings. The new LIST command of the API allows you to view at any time which alarms are currently disabled or silenced (see the sketch after this list).
- A way for netdata to repeatedly send alarm notifications for some, or all active alarms, at a frequency of your choosing. As a result, you will no longer have to worry about missing a notification or forgetting about a raised alarm. The default is still to only send a single notification, so that existing users are not surprised by a different behavior.
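As a sketch of how the management API is used, the commands below query and reset the persisted silencers. The endpoint, token location and `X-Auth-Token` header reflect a default install and should be treated as assumptions:

```sh
# Illustrative only: list the current silencers, then clear them.
TOKEN="$(sudo cat /var/lib/netdata/netdata.api.key)"
curl -H "X-Auth-Token: ${TOKEN}" "http://localhost:19999/api/v1/manage/health?cmd=LIST"
curl -H "X-Auth-Token: ${TOKEN}" "http://localhost:19999/api/v1/manage/health?cmd=RESET"
```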
As always, we’ve introduced new collectors, 5 of them this time.
- Of special interest to people with Windows servers in their infrastructure is the WMI collector, though we are fully aware that we need to continue our efforts to do a proper port to Windows.
- The new `perf` plugin collects system-wide CPU performance statistics from Performance Monitoring Units (PMU) using the `perf_event_open()` system call. You can read a wonderful article on why this is useful here.
- The other three are collectors to monitor Dnsmasq DHCP leases, Riak KV servers and Pihole instances.
Finally, the DB Engine introduced in v1.15.0 now uses much less memory and is more robust than before.
Acknowledgements
As you’ll see in the detailed list below, once again we’ve had great help from our contributors.
- Steve8291 was helping everywhere
- apardyl added useful new alarms and helped with documentation
- jchristgit wrote the Riak KV collector
- Saruspete made improvements to the freeipmi plugin
- kam1kaze has added new charts to the python mysql collector
- akwan and mbarper improved the application monitoring, with new process groupings
- nodiscc helped with bug and documentation fixes
- dankohn helped with the documentation
- andvgal added an amazing configuration to help us run proper lint checks on our markdown files
- octomike, Danamir, mbarper, Wing924, n0coast and toofar delivered bug fixes
- josecv helped improve the Kubernetes helm chart.
We can't stress enough the immense help we get just from users creating an issue in GitHub, helping us identify the root cause and validate the change in their infrastructure. Unfortunately, we are not able to list all of them here, but their contribution is invaluable.
Improvements
Binary packages
- Introduced automatic binary packages generation and delivery for RPM types (Phase 1) #6223 #6369 (paulkatsoulakis)
Health
- Easily disable alarms, by persisting the silencers configuration #6274 #6360 (thiagoftsm)
- Repeating alarm notifications #6309 (thiagoftsm) and (kamcpp)
- Simplified the health cmdapi tester - no setup/cleanup needed #6210 (cakrit)
- Add last_collected alarm to the x509check collector #6139 (ilyam8)
- New alarm for abnormally high number of active processes. #6116 (apardyl)
Security
- SSL support in the web server and streaming/replication #5956 (thiagoftsm)
- Support encrypted connections to OpenTSDB backends #6220 (thiagoftsm)
- Show the security policy directly from GitHub #6163 #6166 (cakrit)
New collectors
- Go.d collector modules for WMI, Dnsmasq DHCP leases and Pihole (ilyam8)
- Riak KV instances collector #6286 (jchristgit)
- CPU performance statistics using Performance Monitoring Units (PMU) via the `perf_event_open()` system call (perf plugin) #6225 (vlvkobal)
Collector improvements
- Handle different sensor IDs for the same element in the freeipmi plugin #6296 (Saruspete)
- Increase the cpu_limit chart precision in cgroup plugin #6172 (vlvkobal)
- Added `userstats` and `deadlocks` charts to the python mysql collector #6118 #6115 (kam1kaze)
- Add perforce server process monitoring to the apps plugin #6064 (akwan)
Backends
DB engine improvements
- Reduced memory requirements by 40-50% #6134 (mfundul)
- Reduced the number of pages needed to be stored and indexed when using `memory mode = dbengine`, by adding empty page detection #6173 (mfundul)
Rebranding
- Updated the netdata logo and changed links to point to the new website #6359 #6398 (cakrit), #6396 (ivorjvr), #6389 (joelhans)
Documentation
- Improve documentation about file descriptors and systemd configuration. #6372 (mfundul)
- Update the documentation on charts with zero metrics #6314 (vlvkobal)
- Document that in versions before 1.16, the plugins.d directory may be installed in a different location in certain OSs #6301 (cakrit)
- Remove single and multi-threaded web server configuration instructions #6291 (nodiscc)
- Add more info on the `stream.conf` option `health enabled by default = auto` #6281 (cakrit)
- Add comments about AWS SDK for C++ installation #6277 (vlvkobal)
- Fix on the installation readme regarding the supported systems (first came RedHat, then the others) #6271 (paulkatsoulakis)
- Update the new dbengine documentation #6264 (mfundul)
- Remove CNCF logo and TOC presentation reference #6234 (dankohn)
- Added code style guidance to CONTRIBUTING #6212 (cakrit)
- Visibility fix for anonymous statistics #6208 (cakrit)
- smartd documentation improvements #6207 (cakrit), #6203 (Steve8291)
- Made custom notification's instructions clearer #6181 (cakrit)
- Fix typo in the web server README #6146 (cakrit)
- Registry documentation fixes #6144 (cakrit)
- Changed 'netdata' to 'Netdata' in /docs/ and /README.md #6137 (apardyl)
- Update installer readme with OpenSUSE dependencies #6111 (mfundul)
- Fixed minor typos in the daemon configuration documentation #6090 (Steve8291)
- Mention anonymous statistics in additional places in the docs #6084 (cakrit)
- Local remark-lint checks and autofix support #5898 (andvgal)
Other
- Pass the `cloud base url` parameter to the notifications mechanism, so that modifications to the configuration are respected when creating the link to the alarm #6383 (ladakis)
- Added a `.gitattributes` file to improve `git diff` for C files #6381 (ac000)
- Improved logging, to be able to trace the `CRITICAL: main[main] SIGPIPE received.` error #6373 (vlvkobal)
- Modify the limits of the stale bot, to close stale questions/discussions in GitHub faster #6297 (ilyam8)
- Internal CI/CD improvements #6282 #6268 (paulkatsoulakis)
- netdata/packaging: Add more distribution validations #6235 (paulkatsoulakis)
- Move call to send_statistics later, to get more telemetry events from docker containers #6113 (vlvkobal), #6096 (cakrit)
- Use github templating mechanisms to classify issues when they are created #5776 (paulfantom)
Bug fixes
- Fixed `ram_available` alarm #6261 (octomike)
- Stop monitoring `/dev` and `/run` in the disk space and inode usage charts #6399 (vlvkobal)
- Fixed the monitoring of the “time” group of processes #6397 (mbarper)
- Fixed compilation error `'PERF_COUNT_HW_REF_CPU_CYCLES' undeclared here` in old Linux kernels (perf plugin) #6382 (vlvkobal)
- Fixed autodetection for openldap on Debian (apps.plugin) #6364 (nodiscc)
- Fixed compilation error on CentOS 6 (nfacct plugin) #6351 (vlvkobal)
- Fixed invalid XML page error (tomcat plugin) #6345 (Danamir)
- Remove obsolete monit metrics #6340 (ilyam8)
- Fixed `Failed to parse` error in adaptec_raid #6338 (ilyam8)
- Fixed `cluster_health_nodes` and `cluster_stats_nodes` charts in the elasticsearch collector #6311 (Wing924)
- A modified slave chart's "name" was not properly transferred to the master (streaming) #6304 (vlvkobal)
- Netdata could run out of file descriptors when using the new DB engine #6303 (mfundul)
- Fixed UI behavior when pressing the `End` key #6294 (thiagoftsm)
- Fixed UI link to check the configuration file, to open in a new tab #6294 (thiagoftsm)
- Fixed files not found during installation, due to a different than expected location of the `libexecdir` directory #6272 (paulkatsoulakis)
- Prevented `Error: 'module' object has no attribute 'Retry'` messages from python collectors, by enforcing a minimum version check for the `UrlService` library #6263 (ilyam8)
- Fixed typo that caused nfacct.plugin log messages to incorrectly show `freeipmi` #6260 (vlvkobal)
- Fixed netdata/netdata docker image failure, when users pass a PGID that already exists on the system #6259 (paulkatsoulakis)
- The daemon could get stuck during collection or during shutdown, when using the new dbengine. Reduced new dbengine IO utilization by forcing page alignment per dimension of chart. #6240 (mfundul)
- Properly handle timeouts/no response in dns_query_time python collector #6237 (n0coast)
- When a collector restarted after having stopped for a long time, the new dbengine would consume a lot of CPU resources. #6216 (mfundul)
- Fixed error `Assertion 'old_state & PG_CACHE_DESCR_ALLOCATED' failed` of the new dbengine. Eliminated a page cache descriptor race condition #6202 (mfundul)
- tv.html failed to load the three left charts when accessed via https. Turn tv.html links to https #6198 (cakrit)
- Change print level from error to info for messages about clearing old files from the database #6195 (mfundul)
- Fixed warning regarding the x509check_last_collected_secs alarms. Changed the template update frequency to 60s, to match the chart’s update frequency #6194 (ilyam8)
- `\r\n` as per the RFC #6187 (toofar)
- Some log entries would not be caught by the python web_log plugin. Fixed the regular expressions #6138 #6180 (ilyam8)
- Corrected the date used in pushbullet notifications #6179 (cakrit)
- Fixed FATAL error when using the new dbengine with no direct I/O support, by falling back to buffered I/O #6174 (mfundul)
- Fixed compatibility issues with varnish v4 (varnish collector) #6168 (ilyam8)
- The total number of disks in mdstat.XX_disks chart was displayed incorrectly. Fixed the "inuse" and "down" disks stacking. #6164 (vlvkobal)
- The config option --disable-telemetry was being checked after restarting netdata, which means that we would still send anonymous statistics the first time netdata was started. #6127 (cakrit)
- Fixed apcupsd collector errors, by passing correct info to the run function. #6126 (Steve8291)
- apcupsd and libreswan were not enabled by default #6120 (Steve8291)
- Fixed incorrect module name: energi to energid #6112 (Steve8291)
- The nodes view did not work properly when a reverse proxy was configured to access netdata via paths containing subpaths (e.g. myserver/netdata) #6093 (gmosx)
- Fix error message `PLUGINSD: cannot open plugins directory` #6080 #6089 (Steve8291)
- Corrected invalid links to web_log.conf that appear on the agent UI #6087 (cakrit)
- Fixed ScaleIO collector endpoint paths go.d PR 226 ilyam8
- Fixed web client timeout handling in the go.d plugin httpcheck collector go.d PR 225 ilyam8
Assets
5
netdatabot released this
Release v1.15.0 contains 11 bug fixes and 30 improvements.
At a glance
We are very happy and proud to be able to include two major improvements in this release: The aggregated node view and the new database engine.
Aggregated node view
The No. 1 request from our community has been a better way to view and manage their Netdata installations, via an aggregated view. The node menu with the simple list of hosts on the agent UI just didn't do it for people with hundreds, or thousands of instances. This release introduces the node view, which uses the power of Netdata Cloud to deliver powerful views of a Netdata-based monitoring infrastructure.
You can read more about Netdata Cloud and the future of netdata here.
New database engine
Historically, Netdata has required a lot of memory for long-term metrics storage. To mitigate this we've been building a new DB engine for several months, and will continue improving it until it can become the default `memory mode` for new Netdata installations. The version included in release v1.15.0 already permits longer-term storage of compressed data and we'll continue reducing the required memory in following releases.
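For anyone who wants to opt in early, switching a node to the dbengine is a `netdata.conf` change. A minimal sketch; the option names follow the dbengine documentation of this release, but treat the exact values as assumptions:

```sh
# Illustrative only: enable the new database engine and give it modest RAM/disk budgets.
cd /etc/netdata
sudo ./edit-config netdata.conf
# [global]
#     memory mode = dbengine
#     page cache size = 32        # MiB of RAM used for caching metric pages
#     dbengine disk space = 256   # MiB of compressed metrics kept on disk
sudo systemctl restart netdata
```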
Other major additions
We have added support for the AWS Kinesis backend and new collectors for OpenVPN, the Tengine web server, ScaleIO (VxFlex OS), ioping-like latency metrics and Energi Core node instances.
We now have a new, "text-only" chart type, cpu limits for v2 cgroups, docker swarm metrics and improved documentation.
We continued improving the Kubernetes helmchart with liveness probes for slaves, persistence options, a fix for a `Cannot allocate memory` issue, and easy configuration for the kubelet, kube-proxy and coredns collectors.
Finally, we built a process to quickly replace any problematic nightly builds and added more automated CI tests to prevent such builds from being published in the first place.
Acknowledgements
Our heartfelt gratitude for this release goes to the following people:
- @kam1kaze for help with Kubernetes, a fix for the Docker image and documentation improvements.
- @andvgal for the Energi Core daemon collector and the improvement of the python.d plugin.
- @skrzyp1 for improving cgroup monitoring.
- @Daniel15 for the much sought-after "text-only" new chart type.
- @Fohdeesha, @SahAssar, and @smonff for improving the documentation.
- @etienne-napoleone, @karuppiah7890 and @varyumin for their contributions to the Kubernetes helm chart.
Improvements
- Support for aggregate node view #5902 (gmosx)
- Database engine #5282 (mfundul)
- New collector modules:
- Go.d collectors for OpenVPN, the Tengine web server and ScaleIO (VxFlex OS) instances (ilyam8)
- Monitor disk access latency like ioping does #5725 (vlvkobal)
- Energi Core daemon monitoring, suits other Bitcoin forks #5894 (andvgal)
- Collector improvements:
- Support the AWS Kinesis backend for long-term storage #5914 (vlvkobal)
- Add a new "text-only" chart renderer #5971 (Daniel15)
- Packaging and CI improvements:
- We can now fix more quickly any problematic published builds via a new manual deployment procedure #5899 (paulkatsoulakis)
- We added more tests to our nightly builds, to catch more errors before publishing images #5918 (paulkatsoulakis)
- API Improvements:
- Kubernetes helmchart improvements:
- Added the init container, where sysctl params could be managed, to bypass the `Cannot allocate memory` issue #18 (kam1kaze)
- Better startup/shutdown of slaves and reduced memory usage with liveness/readiness probes and default memory mode none #19 (cakrit)
- Added the option of overriding the default settings for kubelet, kubeproxy and coredns collectors via values.yaml #24 (cakrit)
- Make the use of persistent volumes optional, add `apiVersion` to fix linting errors and correct the location of the `env` field #22, #23 (karuppiah7890)
- Fix incorrect parameter names in the README #24 (etienne-napoleone)
- Documentation improvements:
Bug fixes
- Prowl notifications were not being sent, unless another notification method was also active #6022 (cakrit)
- Fix exception handling in the python.d plugin #5997 (ilyam8)
- The `node` applications group did not include all node processes. #5962 (jonfairbanks)
- Installation would show incorrect message "FAILED Cannot install netdata init service." in some cases #5947 (paulkatsoulakis)
- The nvidia_smi collector displayed incorrect power usage #5940 (ilyam8)
- The python.d plugin would sometimes hang, because it lacked a connect timeout #5911 (ilyam8)
- The mongodb collector raised errors due to various KeyErrors #5931 (ilyam8)
- The smartd_log collector would show incorrect temperature values #5923 (ilyam8)
- charts.d plugins would fail on docker, when using the `timeout` command #5938 (paulkatsoulakis)
- Docker image had plugins not executable by user netdata #5917 (paulkatsoulakis)
- Docker image was missing the `lsns` command, used to match network interfaces to containers #1 (kam1kaze)
Assets
5
netdatabot released this
Release 1.14 contains 14 bug fixes and 24 improvements.
At a glance.
Acknowledgements
Our contributors kicked the ball out of the park this time. Our thanks go to the following people:
@ekartsonakis for the excellent addition of TLS support to the OpenLDAP collector
@Wing924 whose cat apparently leaves him enough time to help us with springboot2 and a lot more!
@huww98 for his contribution to the NVIDIA SMI plugin.
@varyumin for his help on the Kubernetes helm chart.
@skrzyp1 for the very significant addition of cgroup v2 support
@hsegnitz for his contribution to the web server log plugin.
@archisgore for the quick fixes to the Polyverse-enabled docker image.
@tctovsli for his Rocket Chat notifications improvements.
@JoeWrightss and @vinyasmusic for not letting us get away with spelling mistakes.
@andvgal for the addition to the MongoDB collector.
@piiiggg for the apache proxy documentation fix
@Ferroin for general awesomeness.
Bug Fixes
- Fixed cases where the netdata version produced by the binary or the configure tools of the source code was wrong. Instead of getting something like `netdata-v1.14.0-rc0-39a9sf9g` we would get a `netdata-39a9sf9g`. #5860 (paulkatsoulakis)
- Fixed unexpected crashes of the python plugin on macOS, caused by new security changes made in High Sierra. #5838 (ilyam8)
- Fixed problem autodetecting failed jobs in python.d plugin. It now properly restarts jobs that are being rechecked, as soon as they are able to run. #5837 (ilyam8)
- CouchDB monitoring would stop sometimes with an exception. Fixed the unhandled exception causing the issue. #5833 (ilyam8)
- The netdata api deliberately returned http error 400 when netdata ran in memory mode none. Modified the behavior to return responses, regardless of the memory mode #5819 (cakrit)
- The python.d plugin sometimes does not receive `SIGTERM` when netdata exits, resulting in zombie processes. Added a heartbeat so that the process can exit on `SIGPIPE`. #5797 (ilyam8)
- The new SMS Server Tools notifications did not handle errors well, resulting in cryptic error messages. Improved error handling. #5770 (cakrit)
- The installers would crash on some FreeBSD systems, because `sha256sum` used by the installers is not available on all FreeBSD installations. Modified the installers to properly support FreeBSD. #5760 (paulkatsoulakis)
- Running netdata behind a proxy in FreeBSD did not work, when using UNIX sockets. Added special handling of UNIX sockets for FreeBSD. #5756 (vlvkobal)
- Fixed sporadic build failures of our Docker image, due to dependencies on the Polyverse package ( APK broken state). #5751 (archisgore)
- Fix segmentation fault in streaming, when two dimensions had similar names. #5882 (vlvkobal)
- Kubernetes Helm Chart: Fixed incorrect use of namespaces in ServiceAccount and ClusterRoleBinding (RBAC fixes) (varyumin).
- Elasticsearch: The option to enable HTTPS was not included in the config file, giving the erroneous impression that HTTPS was not supported. The option was added. #5834 (ilyam8)
- RocketChat notifications were not being sent properly. Added default recipients for roles in the health alarm notification configuration. #5545 (tctovsli)
Improvements
- go.d.plugin v0.4.0: Docker Hub and k8s coredns collectors, springboot2 URI filters support.
- go.d.plugin v0.3.1: Add default job to run k8s_kubelet.conf, k8s_kubeproxy, activemq modules
- go.d.plugin v0.3.0: Docker engine, kubelet and kube-proxy collectors. x509check module reading certs from file support
- Added unified cgroup support that includes v2 cgroups #5407 (skrzyp1)
- Disk stats: Added preferred disk id pattern, so that users can see the id they prefer, when multiple ids appear for the same device #5779 (vlvkobal)
- NVIDIA SMI: Added memory free and per process memory usage charts to the collector #5796 (huww98)
- OpenLDAP: Added TLS support, to allow monitoring of LDAPS. #5859 (ekartsonakis)
- PHP-FPM: Add health check to raise alarms when the php-fpm server is unreachable #5836 (ilyam8)
- PostgreSQL: Our configuration options to connect to a DB did not support all possible options. Added an option to connect to a PostgreSQL instance by defining a connection string (URI). #5758 (ilyam8)
- python.d.plugin: There was no way to delete obsolete dimensions in charts created by the python.d plugin. The plugin can now delete dimensions at runtime. #5795 (ilyam8)
- netdata supports sending its logs to Syslog, but the facility was hard-coded. We now support configurable Syslog facilities in `netdata.conf`. #5792 (thiagoftsm)
- We encountered sporadic failures of our kickstart installation scripts after nightly releases. We added integrity tests to our pipeline to ensure we prevent faulty scripts from getting deployed. #5778 (paulkatsoulakis)
- Kubernetes Helm Chart improvements: (cakrit) and (varyumin).
- Added serviceName in statefulset spec to align with the k8s documentation
- Added preStart command to persist slave machine GUIDs, so that pod deletion/addition during upgrades doesn't lose the slave history.
- Disabled non-essential master netdata collector plugins to avoid duplicate data
- Added preStop command to wait for netdata to exit gracefully before removing the container
- Extended configuration file support to provide more control from the helm command line
- Added option to disable Role-based access control
- Added liveness and readiness probes.
Assets
5
netdatabot released this
Release 1.13 contains 14 bug fixes and 8 improvements.
At a glance.
Acknowledgements:
- varyumin, who graciously shared the original Kubernetes Helm chart and is still helping improve it
- p-thurner for his great work on the SSL certificate expiration module.
- Ferroin for his priceless insights and assistance
- Jaxmetalmax for graciously helping us identify and fix postgres connection issues
Improvements
- Kubernetes: Helm chart and proper cgroup naming #5576 (cakrit)
- python.d.plugin: Reduce memory usage with separate process for initial module checking #5552 (ilyam8) and loaders cleanup #5602 (ilyam8)
- IPC shared memory charts #5522 (vlvkobal)
- mysql module add ssl connection support #5610 (ilyam8)
- FreeIPMI: Have the debug option apply the internal freeipmi debug flags #5548 (cakrit)
- Prometheus backend: Support legacy metric names for source=avg #5531 (cakrit)
- Registry: Allow deleting the host we are looking at #5537 (cakrit)
- SpigotMC: Use regexes for parsing. #5507 (Ferroin)
Bug Fixes
- Postgres: fix connection issues #5618 (Jaxmetalmax), #5617 (ilyam8)
- Proxmox container: Fix cgroup naming #5612 (vlvkobal) and use total_* memory counters for cgroups #5592 (vlvkobal)
- proc.plugin and plugins.d: Fix memory leaks #5604 (vlvkobal)
- SpigotMC: Fix UnicodeDecodeError #5598 (ilyam8) and py2 compatibility fix #5593 (ilyam8)
- Fix non-obsolete dimension deletion #5563 (vlvkobal)
- UI: Fix incorrect icon for the streaming master #5560 #5561 (gmosx)
- Docker container names: Retry renaming when a name is not found #5557 (vlvkobal)
- apps.plugin: Don't send zeroes for empty process groups #5540 (vlvkobal)
- go.d.plugin: Correct sha256sum check #5539 (cakrit)
- Unbound module: Documentation corrected with troubleshooting section. #5528 (Ferroin)
- Streaming: Prevent UI issues upon GUID duplication between master and slave netdata instances #5511 (paulkatsoulakis)
- Linux power supply module: Fix missing zero dimensions #5395 (vlvkobal)
- Minor fixes around plugin_directories initialization #5536 (paulkatsoulakis)
Assets
5
netdatabot released this
Patch release 1.12.2 contains 7 bug fixes and 4 improvements.
At a glance.
Bug Fixes
- Installer at isn't updated to master branch #5492
- Zombie processes exist after restarting netdata - add heartbeat to python.d plugin #5491
- Verbose curl output causes unwanted emails from netdata-updater cronjob #5484
- RocketChat notifications not working #5470
- go.d.plugin installation fails due to insufficient timeout #5467
- SIGSEGV crash during shutdown of tc plugin #5366
- CMake warning for nfacct plugin #5379
Improvements
Assets
5
netdatabot released this
Patch release 1.12.1 contains 22 bug fixes and 8 improvements.
Bug Fixes
- Fix SIGSEGV at startup: Don't free vars of charts that do not exist #5455
- Add timeouts to the installer for the go.d plugin and update the installer documentation for servers with no internet access.
- Prevent invalid Linux power supply alarms during startup #5447
- Correct duplicate flag enum in health.h #5441
- Remove extra 'v' for netdata version from Server response header #5440 and spec URL #5427
- Fix curl download in installer #5439
- apcupsd - Treat ONBATT status the same as ONLINE #5435
- Fix #5430 - LogService._get_raw_data under python3 fails on undecodable data #5431
- Correct version check in UI #5429
- Fix ERROR 405: Cannot download charts index from server - cpuidle handle newlines in names #5425
- Improve configure.ac mnl and netfilter_acc checks for static builds #5424
- Fix clock_gettime() failures with the CLOCK_BOOTTIME argument #5415
- Use netnsid for detecting cgroup networks; #5413
- Python module sensors fix #5406 (ilyam8)
- Fix kickstart-static64.sh script #5397
- Fix ceph.chart.py for Python3 #5396 (GaetanF)
- Added missing BuildRequires for autoconf, automake #5363
- Fix wget log spam in headless mode (fixes #5356) #5359
- Fix warning condition for mem.available #5353
- cups.plugin: Support older versions #5350
- Fix AC_CHECK_LIB to work correctly with cups library #5349
- Fix issues reported by Codacy
Improvements
- Add driver-type option to the freeipmi plugin #5384
- Add support of tera-byte size for Linux bcache. #5373
- Split nfacct plugin into separate process #5361
- Localization support in HTML docs, simplification of checklinks.sh #5342
- Cleanup updater script and no `/opt` usage #5218
- Add cgroup cpu and memory limits and alarms #5172
- Add message queue statistics #5115
- Documentation improvements
Assets
5
netdatabot released this
Assets
5
At a glance
Release 1.12 is made out of 211 pull requests and 22 bug fixes.
The key improvements are:
- Introducing `netdata.cloud`, the free netdata service for all netdata users
- High performance plugins with go.d.plugin (data collection orchestrator written in Go)
- 7 new data collectors and 11 rewrites of existing data collectors for improved performance
- A new management API for all netdata servers
- Bind different functions of the netdata APIs to different ports
- Improved installation and updates
netdatabot released this
Assets
5
This is a patch - bug fix release of netdata.
Our work to move all the documentation inside the repo is still in progress. Everything has been moved, but we still need to refactor a lot of the pages to make them more meaningful.
The README file on netdata home has been rewritten. Check it here.
Improved internal database
Overflowing incremental values (counters) no longer show a zero point on the charts. Netdata detects the width (8bit, 16bit, 32bit, 64bit) of each counter and properly calculates the delta when the counter overflows.
The internal database format has been extended to support values above 64bit.
New data collection plugins
- `openldap`, to collect performance statistics from OpenLDAP servers.
- `tor`, to collect traffic statistics from Tor.
- `nvidia_smi`, to monitor NVIDIA GPUs.
Improved data collection plugins
- BUG FIX: network interface names with a colon (`:`) in them were incorrectly parsed and resulted in faulty data collection values.
- BUG FIX: `smartd_log` has been refactored, has better python v2 compatibility, and now supports SCSI smart attributes
- `cpufreq` has been re-written in C; since this module is common, we decided to convert it to an internal plugin to lower the pressure on the python ones. There are a few more that will be transitioned to C in the next release.
- BUG FIX: `sensors` got some compatibility fixes and improved handling for `lm-sensors` errors.
Health monitoring
- BUG FIX: max network interface speed data collection was faulty, which resulted in false-positive alarms on systems with multiple interfaces using different speeds (the speed of the first network interface was used for all network interfaces). Now the interface speed is shown as a badge.
alerta.ionotifications got a few improvements
BUG FIX:
conntrack_maxalarm has been restored (was not working due to an invalid variable name referenced)
Registry (
my-netdata menu)
It has been refactored a bit to reveal the URLs known for each node and now it supports deleting individual URLs.
Packaging
openrcservice definition got a few improvements
netdatabot released this
New to netdata? Check its demo:
Hi all,
It has been 8 months since the last release of Netdata. We delayed releases a bit, but as you can see on these release notes, we were working hard to provide the best Netdata ever.
Thanks to synacktiv.com and red4sec.com, we fixed a number of vulnerabilities in the code base (check below), so release 1.11 of Netdata is the most secure Netdata so far. All users are advised to update to this version asap.
Netdata now has its own organization on GitHub. So, we moved from firehol/netdata to netdata/netdata! We also provide new docker images as netdata/netdata (the old ones are deprecated and are not updated any more).
Netdata community grows faster than ever. Currently netdata grows by +2k unique users and +1k unique installations per day, every day!
Contributions sky rocket too. To make it even easier for newcomers to get involved, we modularized all the code, now organized into a hierarchy of directories. We also moved most of the documentation, from the wiki into the repo. This is quite unique. Netdata is one of the first projects that organizes code and docs under the same hierarchy. Browse the repo; you will be surprised! Examples: data collection plugins, database, backends, web server, ARL, including benchmarks, etc.
Many thanks to all the contributors that help building, enhancing and improving a project useful and helpful to hundreds of thousands of admins, devops and developers around the world!
You rock!
Automatic Updates broken
There was an accidental breaking change in the master repo of netdata.
All users that use automatic updates, are advised to run:
sudo sh -c 'cd /usr/src/netdata.git && git fetch --all && git reset --hard origin/master && ./netdata-updater.sh -f'
After that, netdata-updater will be able to update your netdata.
Stock config files are now in /usr/lib/netdata
We are preparing netdata for binary packages. This requires stock config files to be overwritten unconditionally when new netdata binary packages are installed. So, all config files we ship with netdata are now installed under /usr/lib/netdata/conf.d.
To edit config files, we have supplied the script /etc/netdata/edit-config that automatically moves the config file you need to edit to /etc/netdata and opens an editor for you.
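For example, editing the alarm notification settings could look like this (the file name is just an illustration - pass whichever stock config you need to edit):
sudo /etc/netdata/edit-config health_alarm_notify.conf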
New query engine
The query engine of netdata has been re-written to support query plugins. We have already added the following algorithms that are available for alarm, charts and badges:
stddev, for calculating the standard deviation on any time-frame.
ses or ema or ewma, for calculating the exponential weighted moving average, or single/simple exponential smoothing, on any time-frame.
des, for calculating the double exponential smoothing on any time-frame.
cv or rsd, for calculating the coefficient of variation on any time-frame.
Fixed Security Issues
Identified by Red4Sec.com
CVE-2018-18836 Fixed JSON Header Injection (an attacker could send \n encoded in the request to inject a JSON fragment into the response).
CVE-2018-18837 Fixed HTTP Header Injection (an attacker could send \n encoded in the request to inject an HTTP header into the response).
CVE-2018-18838 Fixed LOG Injection (an attacker could send \n encoded in the request to inject a log line into access.log).
CVE-2018-18839 Not fixed: Full Path Disclosure, since these are intended (netdata reports the absolute filename of web files, alarm config files and alarm handlers).
Identified by Synacktiv
- Fixed Privilege Escalation by manipulating apps.plugin or cgroup-network error handling.
- Fixed LOG injection (by sending URLs with \n in them).
Packaging
- Our official docker hub images are now at netdata/netdata. These images are based on Alpine Linux for optimal footprint. We provide images for i386, amd64, aarch64 and armhf.
- The supplied netdata.service now allows configuring process scheduling priorities exclusively on netdata.service (no need to change netdata.conf too).
- The supplied netdata.service is now installed in /usr/lib/systemd/system.
- Stock netdata configurations are now installed in /usr/lib/netdata/conf.d and a new script has been added to allow easily copying and editing config files: /etc/netdata/edit-config.
New Data Collection Modules
rethinkdbs for monitoring RethinkDB performance
proxysql for monitoring ProxySQL performance
litespeed for monitoring LiteSpeed web server performance.
uwsgi for monitoring uWSGI performance
unbound for monitoring the performance of Unbound DNS servers.
powerdns for monitoring the performance of PowerDNS servers.
dockerd for monitoring the health of dockerd
puppet for monitoring Puppet Server and Puppet DB.
logind for monitoring the number of active users.
adaptec_raid and megacli for monitoring the relevant raid controllers
spigotmc for monitoring minecraft server statistics
boinc for monitoring Berkeley Open Infrastructure Network Computing clients.
w1sensor for monitoring multiple 1-Wire temperature sensors.
monit for collecting process, host, filesystem, etc checks from monit.
linux_power_supplies for monitoring Linux Power Supplies attributes
Data Collection Orchestrators Changes
node.d.plugin does not use the js command any more.
python.d.plugin now uses monotonic clocks. There was a discrepancy in the clocks used in netdata that resulted in a shift in time of python modules after some time (they were missing 1 sec per day).
- added MySQLService for quickly adding plugins using mysql queries.
- URLService now supports self-signed certificates and supports custom client certificates.
- all python.d.plugin modules that require sudo to collect metrics are now disabled by default, to avoid security alarms on installations that do not need them.
Improved Data Collection Modules
apps.plugin now detects changes in process file descriptors; also fixed a couple of memory leaks. Its default configuration has been enriched significantly, especially for IoT.
freeipmi.plugin now supports the option ignore-status to ignore the status reported by given sensors.
statsd.plugin (for collecting custom APM metrics)
- The charting thread has been optimized for lowering its CPU consumption when several millions of metrics are collected.
- sets now report zeros instead of gaps when no data are collected
- histograms and timers have been optimized to lower their CPU consumption when several thousands of such metrics are collected.
- histograms had wrong sampling rate calculations.
- gauges now ignore the sampling rate when no sign is included in the value.
- the minimum sampling rate supported is now 0.001.
- netdata statsd is now drop-in replacement for datadog statsd (although statsd tags are currently ignored by netdata).
proc.plugin (Linux, system monitoring)
- Unused interrupts and softirqs are not used in charts (this saves quite some processing power and memory on systems with dozens of CPU cores).
- fixed /proc/net/snmp parsing of IcmpMsg lines that failed on a few systems.
- Veritas Volume Manager disks are now recognized and named accordingly.
- Now netdata collects TcpExtTCPReqQFullDrop and re-organizes metrics in charts to properly monitor the TCP SYN queue and the TCP Accept queue of the kernel.
- Many charts that were previously reported as IPv4 were actually reflecting metrics for both IPv4 and IPv6. They have been renamed to ip.*.
- netdata now monitors SCTP.
- Fixed BTRFS over BCACHE sector size detection.
- BCACHE data collection is now faster.
- /proc/interrupts and /proc/softirqs parsing fixes.
diskspace.plugin (Linux, disk space usage monitoring)
- It does not stat() excluded mount points any more (it was interfering with kerberos authenticated mount points).
- several filesystems are now by default excluded from disk-space monitoring, to avoid breaking suspend on workstations.
freebsd.plugin (FreeBSD, PFSense, system monitoring)
laundry memory is now monitored.
system.net and system.packets charts added that report the total bandwidth and packets of all physical network interfaces combined.
python.d.plugin PYTHON modules (applications monitoring)
web_log module now supports virtual hosts, reports http/https metrics, supports squid logs
nginx_plus module now handles non-continuous peer IDs (bug fix)
ipfs module is optimized, the use of its Pin API is now disabled by default and can be enabled with a netdata module option (using the IPFS Pin API increases the load on the IPFS server).
fail2ban module now supports IPv6 too.
ceph module now checks permissions and properly reports issues
elasticsearch module got better error handling
nginx_plus module now uses upstream ip:port instead of a transient id to identify dimensions.
redis, now it supports Pika, collects evicted keys, fixes reported authentication issues and improves exception handling.
beanstalk, bug fix for yaml config loading.
mysql, the % of active connections is now monitored, query types are also charted.
varnish, now it supports versions above 5.0.0
couchdb
phpfpm, now supports IPv6 too.
apache, now supports IPv6 too.
icecast
mongodb, added support for connect URIs
postgres
elasticsearch, now it supports versions above 6.3.0, fixed JSON parse errors
mdstat, now collects mismatch_cnt
openvpn_log
node.d.plugin NODE.JS modules
snmp was incorrectly parsing new OID names as floats. Fixed it.
charts.d.plugin BASH modules
nut now supports naming UPSes.
Health Monitoring
- Added variable $system.cpu.processors.
- Added alarms for detecting abnormally high load average.
- TCP SYN and TCP accept queue alarms, replacing the old softnet dropped alarm that was too generic and reported many false positives.
- system alarms are now enabled on FreeBSD.
- netdata now reads NIC speed and sets alarms on each interface to detect congestion.
- Network alarms are now relaxed to avoid false positives.
- New bcache alarms.
- New mdstat alarms.
- New apcupsd alarms.
- New mysql alarms.
- New notification methods:
- rocket.chat
- Microsoft Teams
- syslog
- fleep.io
- Amazon SNS
Backends
- Host tags are now sent to Graphite
- Host variables are now sent to Prometheus
Streaming
- Each netdata slave and proxy now filter the charts that are streamed. This allows exposing netdata masters to third parties by limiting the number of charts available at the master.
- Fixed a bug in streaming slaves that randomly prevented them to resume streaming after network errors.
- Fixed a bug on slaves that sent duplicated chart names under certain conditions.
- Fixed a bug that caused slaves to consume 100% CPU (due to a misplaced lock) when multiple threads were adding dimensions on the same chart.
- The receiving nodes of streaming (netdata masters and proxies) can now rate-limit the rate of inbound streaming requests received.
- Re-worked time synchronization between netdata slaves and masters.
API
- Badges that report time now show undefined instead of never.
Dashboard
- Added UTC timezone to the list of available time-zones.
- The dashboard was sending some non-HTTP compliant characters at the URLs that made netdata dashboards break when used under certain proxies. Fixed.
v1.10.0
firehol-automation released this
Assets
- 2.41 MB netdata-1.10.0.tar.bz2
- 455 Bytes netdata-1.10.0.tar.bz2.asc
- 57 Bytes netdata-1.10.0.tar.bz2.md5
- 153 Bytes netdata-1.10.0.tar.bz2.sha
- 2.7 MB netdata-1.10.0.tar.gz
- 455 Bytes netdata-1.10.0.tar.gz.asc
- 56 Bytes netdata-1.10.0.tar.gz.md5
- 152 Bytes netdata-1.10.0.tar.gz.sha
- 2.1 MB netdata-1.10.0.tar.xz
- 455 Bytes netdata-1.10.0.tar.xz.asc
- 56 Bytes netdata-1.10.0.tar.xz.md5
- 152 Bytes netdata-1.10.0.tar.xz.sha
- 5.35 MB netdata-latest.gz.run
- 56 Bytes netdata-latest.gz.run.md5
- 152 Bytes netdata-latest.gz.run.sha
- 5.35 MB netdata-v1.10.0-x86_64-20180327-195445.gz.run
- 80 Bytes netdata-v1.10.0-x86_64-20180327-195445.gz.run.md5
- 176 Bytes netdata-v1.10.0-x86_64-20180327-195445.gz.run.sha
- Source code (zip)
- Source code (tar.gz)
New to netdata? Check its demo:
Posted on twitter, facebook, reddit r/linux,
Hi all,
Another great netdata release: netdata v1.10.0 !
This is a birthday release: netdata is now 2 years old !
Many thanks to all the contributors that help building, enhancing and improving a project useful and helpful for thousands of admins, devops and developers around the world! You rock!
At a glance
netdata now has a new web server (called static) with a fixed number of threads, providing a lot better performance and finer control of the resources allocated to it.
All dashboard elements (javascript) have been updated to their latest versions - this allows a smoother experience when embedding netdata charts on third party web sites and apps.
IMPORTANT: all users using older netdata are advised to update to this version. This version offers improved stability, security and a huge number of bug fixes, compared to any prior version of netdata.
new plugins
- BTRFS - monitor the allocations of BTRFS filesystems (yes, netdata can now properly detect when btrfs is going out of space)
- BCACHE - monitor the caching block layer that allows building hybrid disks using normal HDDs and SSDs
- Ceph - monitor ceph distributed storage
- nginx plus - monitor the nginx+ web servers
- libreswan - monitor IPSEC tunnels
- Traefik - monitor traefik reverse proxies
- icecast - monitor icecast streaming servers
- ntpd - monitor NTP servers
- httpcheck - monitor any remote web server
- portcheck - monitor any remote TCP port
- spring-boot - monitor java spring boot applications
- dnsdist - monitor dnsdist name servers
- hugepages - monitor the allocation of Linux hugepages
enhanced / improved plugins
- statsd
- web_log
- containers monitoring
- system memory
- diskspace
- network interfaces
- postgres
- rabbitmq
- apps.plugin
- haproxy
- uptime
- ksm
- mdstat
- elasticsearch
- apcupsd
- isc-dhcpd
- fronius
- stiebeleltron
new alarm notifications methods
- alerta
- IRC
And as always, hundreds more enhancements, improvements and bugfixes.
BTRFS monitoring
BTRFS space usage monitoring and related alarms.
netdata is able to detect if any of the space-related components (physical disk allocation, data, metadata and system) of BTRFS is about to become exhausted!
#3150 - thanks to @Ferroin for explaining everything about btrfs...
bcache monitoring
netdata now monitors bcache metrics - they are automatically added to any disk that is found to be a bcache disk.
ceph monitoring
New plugin to monitor ceph, the unified, distributed storage system designed for excellent performance, reliability and scalability (#3166 @lets00).
containers and VMs monitoring
- netdata now monitors systemd-nspawn containers.
- netdata now renames charts of kubernetes containers.
- virsh is now called with -r to avoid prompting for a password #3144
- cgroup-network is now a lot more strict, preventing unauthorized privilege escalation #3269
- cgroup-network now searches for container processes in sub-cgroups too - this improves the mapping of network interfaces to containers
- cgroup-network now works even when there are no veth interfaces in the system
monitor ntpd
netdata can now monitor isc-ntpd. @rda0 did a marvelous job decoding NTP Control Message Protocol, collecting ntpd metrics in the most efficient way #3421, #3454 @rda0
btw, netdata also monitors chrony, but the chrony module of netdata is disabled by default, because certain CentOS versions ship a version of chrony that consumes 100% cpu when queried for statistics.
nginx plus web servers monitoring
Added python plugin to monitor the operation of nginx plus servers. The plugin monitors everything about nginx+, except streaming #3312 @l2isbad
libreswan IPSEC tunnels monitoring
netdata now monitors libreswan tunnels - #3204
remote HTTP/HTTPS server monitoring
netdata now has an httpcheck plugin (module of python.d.plugin), that can query remote http/https servers, track the response timings and check that the response body contains certain text #3448 @ccremer.
remote TCP port monitoring
netdata now has a portcheck plugin (module of python.d.plugin), that can check whether any remote TCP port is open #3447 @ccremer
icecast streaming server monitoring
netdata now monitors icecast servers #3511 @l2isbad.
traefik reverse proxy monitoring
netdata now monitors traefik reverse proxies - #3557.
spring-boot monitoring
netdata can now monitor java spring-boot applications @Wing924
dnsdist
netdata now monitors dnsdist name servers - @nobody-nobody #3009
statsd
- statsd dimensions now support the options the external plugin dimensions support (currently the only usable option is hidden, to add the dimension but make it hidden on the dashboard - a hidden dimension can participate in various calculations, including alarms).
- statsd now reports the CPU usage of its threads at the netdata section.
- statsd metrics are logged to access.log the first time they are encountered.
- statsd metrics now accept the special value zinit to allow them to get initialized without altering their values (this is useful if you have rare metrics that you need to initialize when netdata starts).
- statsd over TCP is now a lot faster - netdata can process up to 3.5mil statsd metrics / second using just one core. Added options to control the timeouts of TCP statsd connections.
- fixed the title and context of statsd private charts
- statsd private charts can now be hidden from the dashboard #3467
postgres
Several new charts have been added to monitor (#3400 by @anayrat):
- checkpointer charts
- bgwriter charts
- autovacuum charts
- replication delta charts
- WAL archive charts
- WAL charts
- temporary files charts
Also, the postgres plugin now also works when postgres is in recovery mode.
rabbitmq
- added Erlang run queue chart. This is useful in conjunction with the existing Erlang processes chart to get a better overall idea of what's going on in the Erlang VM. @arch273
- added rabbitmq information on the dashboard to complement the charts.
apps.plugin
netdata prior to this version was detecting the user and group of processes by examining the ownership of /proc/PID/stat. Unfortunately it seems that the ownership of files in /proc does not change when the process switches user. So, netdata could not detect the user and group of processes that started as root and then switched to another user.
Now netdata reads /proc/PID/status:
- process ownership information is now accurate
- eliminated the need to read /proc/PID/statm (all the information of /proc/PID/statm is available in /proc/PID/status)
- allowed netdata to read VmSwap, so a new chart has been added to monitor the swap memory usage per process, user and group.
- fixed issue with unreasonable spikes on processes cpu on FreeBSD (there was a typo) #3245
- fixed issue with errors reported on FreeBSD about pid 0 #3099
The new plugin is 20% more expensive in terms of CPU. We tried hard to optimize it, but this is as good as it can get. Read about it at #3434 and #3436
haproxy
Added charts:
- hrsp_1xx, hrsp_2xx, hrsp_3xx, hrsp_4xx, hrsp_5xx, hrsp_other, hrsp_total for backends and frontends
- qtime, ctime, rtime, ttime metrics for backend servers
- backend servers in UP state
uptime
netdata now uses /proc/uptime when CLOCK_BOOTTIME does not report the same uptime. In containers, CLOCK_BOOTTIME reports the uptime of the host, while /proc/uptime reports the uptime of the container, so now netdata correctly reports the uptime of the container.
mdstat
various fixes to better monitor rebuild time and rate @l2isbad
KSM
- removed the to_scan dimension
- the savings % reported by netdata was less than the actual - fixed it.
elasticsearch
Added several charts for translog / indices segments statistics and JVM buffer pool utilization, which are often helpful when evaluating an elasticsearch node health #3544 @NeonSludge
memory monitoring
- treat slab memory as cached #3288 @amichelic
- added a new chart for monitoring the memory available for use, before hitting swap
- netdata now monitors Linux hugepages and transparent hugepages
- added hugepages monitoring #3462
diskspace monitoring
- support huge numbers of mountpoints #3258 - netdata was crashing with a stack overflow due to recursion - now it is a loop, so any number of mount points is supported
network monitoring
- moved tcp passive and active opens to a separate chart, to allow the TCP issues dimensions scale better by default #3238
- updated the information presented on TCP charts to match the latest v4.15 kernel source #3239
APC UPS
netdata now supports monitoring multiple APC UPSes.
ISC DHCPd
netdata now also supports monitoring IPv6 leases - @l2isbad
fronius
stiebeleltron
web_log
Added web server response timings histogram #3558 @Wing924 .
python.d.plugin
- python.d.plugin can now start even if /etc/netdata/python.d.conf is missing @l2isbad
- python.d.plugin now has an internal run counter @l2isbad
- the unicode decoding of the plugin has been fixed (#3406) @l2isbad
- the plugin now does not validate self-signed certificates @l2isbad
- the plugin can now revive obsolete charts @l2isbad
charts.d.plugin
charts.d.plugin BASH modules can now have custom number of retries in case of data collection failures #3524.
web server
- netdata now has a new internal web server that supports a fixed number of threads - we call it the static web server. This web server allows netdata to work around memory fragmentation (since the threads are fixed, the underlying memory allocators reuse the same memory arenas) and cpu utilization (we can control the number of threads that will be used by netdata). This is the default now. #3248
- now the static threads web server reports the CPU usage of each of its threads.
- the HTTP response headers now include the netdata version
dashboard
the print button now respects the URL path netdata is hosted.
dygraphs updated to the latest version - this fixes an issue that prevented netdata charts from being interactive under certain conditions
added dygraph theme logscale #3283
fontawesome updated to version 5
d3 updated to the latest version (this broke c3 charts that require an older version)
- custom dashboards can now have alarms for specific roles (all, none, one or more).
allow stacked charts to zoom vertically when dimensions are selected
netdata now has a global XSS protection #3363
netdata now uses intersectionObserver when available #3280 - this improves the scrolling performance of the dashboard.
prevent date, time and units from wrapping at the charts legends #3286
various units scaling improvements #3285
added data-common-colors="NAME" chart option for custom dashboards #3282.
added wiki page for creating custom dashboards on Atlassian's Confluence.
prevented a double click on the charts' toolbox to select the text of the buttons.
fixed the alignment of dashboard icons #3224 @xPaw
added a simple js, called refresh-badges.js, to update badges on a custom web page
badges
netdata badges can now be scaled #3474
API
- added the gtime parameter, for group time. This is used to ask netdata to return values at a different rate (i.e. gtime=60 on a X/sec dimension will return X/min); see the example after this list.
- fixed a rounding bug in JSON generation #3309
- the dimensions= parameter now supports simple patterns #3170 and added option values match-ids and match-names to control which matches are executed for dimensions.
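For example, a gtime query could look like this (the host, chart and format values are illustrative - adjust them to your setup):
http://localhost:19999/api/v1/data?chart=system.cpu&gtime=60&format=json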
alarms
system.swap alarms now send notifications with a 30 second delay, to work around a kernel bug that incorrectly reports all swap as instantly used under containers #3380.
added alarm to predict the time a mount point will run out of inodes #3566.
all system alarms are now ported to FreeBSD too #3337 @arch273
added alerta.io notifications @kattunga
added available memory alarm
removed unsupported html tags from hipchat notifications.
pagerduty notifications have been modified to avoid incident duplication #3549.
alarm definitions can now use both chart IDs and chart names (prior to this version only chart IDs were allowed).
curl options (eg for disabling SSL certificate verification) for alarm-notify.sh can now be defined in health_alarm_notify.conf.
netdata can now send notifications to IRC channels #3458 @manosf
backends
- on netdata masters, allow filtering the hosts that will be sent to backends with send hosts matching = *pattern.
- improved connection error handling and added retries to allow netdata to connect to certain backends that failed with EALREADY or EINPROGRESS.
- json backends now receive host tags (the tags have to be formatted in a json friendly way) #3556.
- re-worked the alarm that triggers when backend data are lost, to avoid flip-flops.
prometheus backends
- added URL option timestamps=yes|no to /api/v1/allmetrics to support prometheus Pushgateway #3533 (see the example after this list)
- added netdata_info variable with the version of netdata
- renamed netdata_host_tags to netdata_host_tags_info (the old one still exists but is deprecated and will be removed eventually)
- when prometheus uses average metrics, netdata remembers the last time prometheus collected metrics, on a per host basis.
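For example, disabling timestamps when scraping through the Pushgateway could look like this (illustrative URL - adjust host and options as needed):
http://localhost:19999/api/v1/allmetrics?format=prometheus&timestamps=no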
metrics streaming between netdata
- netdata masters and proxies now expose the version of the netdata collecting the metrics, not their own. So, now a netdata master shows on the dashboard and sends to backends the version of the netdata collecting the metrics #3538.
- added stream.conf option multiple connections = accept | deny to allow or deny multiple connections for the same netdata host. The default remains accept, but it is likely to be changed to deny in future versions.
packaging
- added docker hub builds for aarch64/arm64 @justin8
- updated debian containers to use stretch @justin8
- added FreeBSD init file
- various installers fixes and improvements (make sure netdata is started, do not give information about features not supported on each operating system, allow non-root installations without errors, etc.)
- various installer fixes for FreeBSD and MacOS
- netdata-updater was growing the PATH variable on each of its runs - fixed it.
- added --accept and --dont-start-it command line options to kickstart-static64.sh
- netdata can be compiled without long double support (useful in embedded devices that don't support long double numbers) #3354
- fixed netdata.spec to allow building netdata on older and newer rpm based distros. Also added a script to build a netdata rpm
- the static netdata installer now tries to find the location of the SSL ca-certificates on a system and properly configures the static curl provided with this path.
- the netdata updater starts netdata only if it was running
- added alpine dockerfile
other
- added global option gap when lost iterations to control the number of iterations that should be lost to show a gap on the charts.
- various fixes/improvements related to netdata logs - the main change is that now netdata logs the thread name that logged the message, providing helpful insights about the thread that complained.
- re-worked the exit procedure of netdata to allow it cleanup properly - sometimes netdata was deadlocked during exit, waiting forever - now netdata always exits promptly #3184
- fixed compilation on ancient gcc versions
- netdata was always setting itself to the idle process scheduling priority, even when it was configured to do otherwise. Fixed it #3523
Netdata v1.22.1
Release v1.22.1 is a hotfix release to address issues related to packaging and how Agents connect to Netdata Cloud.
For Netdata Cloud, we optimized the on-connect payload sent through the Agent-Cloud link to improve latency between Agents and Cloud. We also removed a check for old alarm status when sending alarms to Cloud via the ACLK.
Finally, we made a fix that ensures Agents running on systems using the musl C library can receive auto-updates.
Bug fixes
|
https://www.ctolib.com/article/releases/1504
|
CC-MAIN-2020-24
|
en
|
refinedweb
|
NumPy is a library for the Python programming language that adds support for large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions to operate on those arrays. Python NumPy searching refers to the set of NumPy functions used to locate specific elements in such arrays.
In this article, we will learn about the techniques we use for Python NumPy searching.
A NumPy array stores similar types of elements in a continuous structure. We have seen so many times that there is a need to look at the maximum and minimum elements of the arrays at a dynamic run time. NumPy provides us with a set of functions that enables us to search for the specific elements having certain conditions applied to them.
How to Search NumPy Arrays
- argmax() function: With this function, it becomes easy to fetch and display the index of the maximum element present in the array structure. The argmax() function returns the index of the largest element as its result.
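A minimal example of argmax(), in the same style as the snippets below (the array values are just an illustration):
import numpy as np
x = np.array([[40, 10, 20, 11, -1, 0, 10], [1, 2, 3, 4, 5, 0, -1]])
y = np.argmax(x)
print("Max element's index:", y)
The output will be:
Max element's index: 0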
- NumPy nanargmax() function: With the nanargmax() function, one can easily deal with the NaN or NULL values present in the array. They do not get treated differently: NaN values have no effect on the search for the maximum value. The syntax will be:
numpy.nanargmax()
In the example given, the array elements contain a NULL value passed using the numpy.NaN function. Now we use nanargmax() function to search NumPy arrays and find the maximum value from the array elements without letting the NAN elements affect the search.
import numpy as np x = np.array([[40, 10, 20,np.nan,-1,0,10],[1,2,3,4,np.nan,0,-1]]) y = np.nanargmax(x) print(x) print("Max element's index:", y)
The output will be:
[[40. 10. 20. nan -1. 0. 10.] [ 1. 2. 3. 4. nan 0. -1.]] Max element's index: 0
- NumPy argmin() function: With the argmin() function, we can easily search NumPy arrays and fetch the index of the smallest element present in the array at a broader scale. It searches for the smallest value present in the array structure and returns that value's index. Therefore, with the index, it becomes easy to get the smallest element present in the array. The syntax will be:
numpy.argmin() function
import numpy as np x = np.array([[40, 10, 20,11,-1,0,10],[1,2,3,4,5,0,-1]]) y = np.argmin(x) print(x) print("Min element's index:", y)
The output will be:
[[40 10 20 11 -1 0 10] [ 1 2 3 4 5 0 -1]] Min element's index: 4
In the example given above, there are two indexes that hold the lowest element, [-1]. The argmin() function returns the index of the first occurrence of the smallest element from the array values.
- NumPy where() function: This function easily searches NumPy arrays for the indices of elements that match the condition passed as the parameter of the function.
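A small illustration of where() (the array and condition are just examples):
import numpy as np
x = np.array([40, 10, 20, 11, -1, 0, 10])
indices = np.where(x > 10)
print(indices)
The output will be:
(array([0, 2, 3]),)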
- NumPy nanargmin() function: This function helps you to search NumPy arrays. It makes it easy to find the index of the smallest value present in the array elements. The user doesn't have to worry about the NaN values present in them. The NULL values have zero effect on the search for the elements.
|
https://www.developerhelps.com/python-numpy-searching/
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
Authentication #
Kuzzle handles authentication in a generic way with a strategy system that can be added via plugins.
These strategies are responsible for managing user credentials and verifying them during the authentication phase.
Plugins have a secure storage space accessible only from the plugin code.
This space is used to store sensitive information such as user credentials.
Learn more about Writing Plugins.
Each user can then use one of the available strategies to authenticate himself.
Kuzzle User IDentifier (kuid) #
Users are identified by a unique identifier called the Kuzzle User IDentifier or kuid.
This is for example the kuid found in the Kuzzle Metadata of documents:
{ "name": "jenow", "age": 32, "_kuzzle_info": { "creator": "c2eaced2-c388-455a-b018-940b68cbb5a2", "createdAt": 1605018219330, "updater": "940b940b6-c388-554a-018b-ced28cbb5a2", "updatedAt": 1705018219330 } }
The kuid is auto-generated unless it is passed to the security:createUser API action:
kourou security:createUser '{ content: { profileIds: ["default"] }, credentials: { local: { username: "my", password: "password" } } }' --id mylehuong
Credentials #
In Kuzzle, user credentials are composed of a list of authentication strategies and their respective profile data.
They must be provided at the creation of a user in the credentials property of the user's content, passed in the body of the query.
Example: Create a user with local credentials
kourou security:createUser '{ content: { profileIds: ["default"] }, credentials: { local: { username: "mylehuong", password: "password" } } }'
They will then be stored by the plugin in charge of the local strategy, in a secure storage space accessible only from the plugin's scope.
It is possible to manipulate a user's credentials:
- security:getCredentials: retrieve credentials information for a strategy
- security:createCredentials: create new credentials for another strategy
- security:deleteCredentials: delete credentials for a strategy
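For example, retrieving the local credentials of the user created above could look like this with kourou (the exact flag names here are an assumption - check kourou's help for your version):
kourou security:getCredentials -a strategy=local -a _id=mylehuong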
When a user wants to authenticate to Kuzzle, they must choose a strategy and then provide the information requested by it.
For instance, the local strategy requires a username and a password:
kourou auth:login -a strategy=local --body '{ username: "mylehuong", password: "password" }'
Authentication Token #
Authentication is performed using the auth:login API action.
This action requires the name of the strategy to be used as well as any information necessary for this strategy.
When authentication is successful, Kuzzle returns an authentication token. This token has a validity of 1 hour by default; after that it will be necessary to ask for a new one with either auth:refreshToken or auth:login.
It is possible to request an authentication token valid for more than 1 hour with the expiresIn argument.
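For example, requesting a token valid for 24 hours might look like this (passing expiresIn this way is an assumption about the kourou syntax):
kourou auth:login -a strategy=local -a expiresIn=24h --body '{ username: "mylehuong", password: "password" }'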
The default validity period is configurable under the key security.jwt.expiresIn.
It is also possible to set a maximum validity period for a token under the key security.jwt.maxTTL.
Possible values:
<= -1: disable the use of maxTTL
>= 0: enable maxTTL with the set value (0 will invalidate all your authentication tokens at their creation)
For historical reasons the API terminology uses the term jwt, but Kuzzle authentication tokens only have the algorithms used to generate and verify them in common with JSON Web Tokens.
Authentication tokens are revocable using the auth:logout API action.
It's also possible to revoke every authentication token of a user with the security:revokeTokens action.
Authentication Token Expiration #
Authentication tokens expire after a defined period of time. Once an authentication token has expired, it cannot be used in any way.
If the client has subscribed to real-time notifications then they will be notified at the time of expiration with a TokenExpired server notification.
While an authentication token is still valid, it is possible to provide it to the auth:refreshToken API action to request a new, fresher authentication token, without having to ask for credentials.
local Strategy #
The local strategy allows users to authenticate with a username and a password.
This information must be passed in the auth:login API action body:
kourou auth:login -a strategy=local --body '{ username: "mylehuong", password: "password" }'
Authentication Token in the Browser #
When you're sending HTTP requests from a browser you can instruct Kuzzle to load and store authentication tokens within an HTTP Cookie.
This is possible thanks to the option cookieAuth in auth:login, auth:logout, auth:checkToken, auth:refreshToken
You can disable the cookie authentication by setting http.cookieAuthentication to false in the Kuzzle Configuration.
local Strategy Configuration #
The strategy can be configured under the plugins.kuzzle-plugin-auth-passport-local configuration key.
{ "plugins": { // [...] "kuzzle-plugin-auth-passport-local": { // one of the supported encryption algorithms // (run crypto.getHashes() to get the complete list). "algorithm": "sha512", // boolean and controlling if the password is stretched or not. "stretching": true, // describes how the hashed password is stored in the database // "digest": "hex", // determines whether the hashing algorithm uses crypto.createHash (hash) // or crypto.createHmac (hmac). // "encryption": "hmac", // if true, kuzzle will refuse any credentials update or deletion, // unless the currently valid password is provided // or if the change is performed via the security controller "requirePassword": false, // a positive time representation of the delay after which a // reset password token expires (see ms for possible formats). "resetPasswordExpiresIn": -1, // set of additional rules to apply to users, or to groups of users "passwordPolicies": [] } } }
Password Policies #
Password policies can be used to define a set of additional rules to apply to users, or to groups of users.
Each password policy is an object with the following properties:
appliesTo: (mandatory). Can either be set to the wildcard * to match all users, or to an object.
appliesTo.users: an array of user kuids the policy applies to.
appliesTo.profiles: an array of profile ids the policy applies to.
appliesTo.roles: an array of role ids the policy applies to.
At least one of the users, profiles or roles properties must be set if appliesTo is an object.
Optional properties #
expiresAfter: the delay after which a password expires (see ms for possible formats). Users with expired passwords are given a resetPasswordToken when logging in and must change their password to be allowed to log in again.
forbidLoginInPassword: if set to true, prevents users from using their login as part of the password. The check is case-insensitive.
forbidReusedPasswordCount: the number of passwords to store in history and check against when a new password is set, to prevent password reuse.
mustChangePasswordIfSetByAdmin: if set to true, whenever a password is set for a user by someone else, that user will receive a resetPasswordToken upon their next login and they will have to change their password before being allowed to log in again.
passwordRegex: a string representation of a regular expression to test on new passwords.
Example:
No user can use a password that includes the login and the password must be at least 6 chars long.
Editors and admin users passwords expire every 30 days and the password must be at least 8 chars long and include at least one letter and one digit.
Admin users passwords must either be 24 or more chars long, or include a lower case char, an upper case char, a digit and a special char.
{ "passwordPolicies": [ { "appliesTo": "*", "forbidLoginPassword": true, "passwordRegex": ".{6,}" }, { "appliesTo": { "profiles": ["editor"], "roles": ["admin"] }, "expiresAfter": "30d", "mustChangePasswordIfSetByAdmin": true, "passwordRegex": "^(?=.*[a-zA-Z])(?=.*[0-9])(?=.{8,})" }, { "appliesTo": { "roles": ["admin"] }, "passwordRegex": "^(((?=.*[a-z])(?=.*[A-Z])(?=.*[0-9])(?=.*\\W)(?=.{8,}))|(?=.{24,}))" } ] }
oauth Strategy #
This plugin allows to authenticate with OAuth providers such as Facebook, Twitter, etc by using Passport.js OAuth2.
This plugin is not shipped by default with Kuzzle and must be installed via NPM:
npm install kuzzle-plugin-auth-passport-oauth
Then you need to instantiate it and use it within your application:
import PluginOAuth from 'kuzzle-plugin-auth-passport-oauth'; import { Backend } from 'kuzzle'; const app = new Backend('tirana'); app.plugin.use(new PluginOAuth());
This strategy allows to create users in Kuzzle if they don't already exist when they login for the first time.
oauth Strategy Configuration #
Once installed, the OAuth plugin can be configured under the plugins.kuzzle-plugin-auth-passport-oauth configuration key.
Here is an example of a configuration:
{ // List of the providers you want to use with passport "strategies": { "facebook": { // Strategy name for passport (eg. google-oauth20 while the name of the provider is google) "passportStrategy": "facebook", // Credentials provided by the provider "credentials": { "clientID": "<your-client-id>", "clientSecret": "<your-client-secret>", "callbackURL": "", "profileFields": ["id", "name", "picture", "email", "gender"] }, // Attributes you want to persist in the user credentials object if the user doesn't exist "persist": [ "picture.data.url", "last_name", "first_name", "email" ], // List of fields in the OAUTH 2.0 scope of access "scope": [ "email", "public_profile" ], //Mapping of attributes to persist in the user persisted in Kuzzle "kuzzleAttributesMapping": { // will store the attribute "email" from oauth provider as "userEmail" into the user credentials object "userMail": "email" }, // Attribute from the profile of the provider to use as unique identifier if you want to persist the user in Kuzzle "identifierAttribute": "email" } }, // Profiles of the new persisted user "defaultProfiles": [ "default" ] }
identifierAttribute
This attribute will be used to identify your users. It has to be unique.
You need to choose an attribute declared in the persist array.
Attributes Persistence
Attributes declared in the persist array will be persisted in the credentials object and not in the user content.
For example, if you have the following configuration:
{ "strategies": { "facebook": { "persist": ["email", "first_name", "picture.data.url"], "kuzzleAttributesMapping": { "picture.data.url": "avatar_url" } } } }
And your OAuth provider will send you the following _json payload:
{ "email": "gfreeman@black-mesa.xen", "first_name": "gordon", "last_name": "freeman", "picture": { "data": { "url": "http://..." } } }
The created user content will be:
{ "content": { "profileIds": ["default"] }, "credentials": { "facebook": { "email": "gfreeman@black-mesa.xen", "first_name": "gordon", "avatar_url": "http://..." } } }
|
https://doc.kuzzle.io/core/2/guides/main-concepts/authentication/
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
is it possible in C++ to receive a word(String) as input and add that string in a statement (if -else or anything ) like python . giving a python code below num1 = int(input("Enter the first number ")) num2 = int(input("Enter the second number ")) print("Type Sum for calculate sum ") print("Type Sub for calculate ..
Category : user-input
Quite simply, I have passed a large string as an input to GMP’s mpz_class constructor, and the value is a different integer. These values were acquired through vs code’s debugger. From the main file: User john(1024, "340282366920938463463370103832140841039", "340282366920938463463370103832140841051", 17); The User constructor: User::User(const int k, std::string p, std::string q, const int e) { this->m_k = ..
I have tried so many things to try and get this working. I am very new to dynamic SQL, and the Internet doesn’t seem to be helping me very much. I am trying to get this query working, but I don’t understand how to implement the ? with taking in any of the input. I ..
I’m facing a bug where, after taking in the user input from a while loop, my code does not accept the last value. This bug happens on ONE specific example, and I have no clue why this is happening. So, for example, the user inputs: 7 3 1 4 0 0 2 0 The output ..
So I am making a tic tac toe game, user vs pc with a simple random generator. What I was planning was to make a counter of how many times the user inputs a choice on the tic tac toe board. That way I can generate a random number, and place it in a vector, ..
#include <iostream> #include "multiplication.h" #include "subtraction.h" using namespace std; int main() { multiplication out; subtraction out2; int x, y, z; int product; int difference; cout << "Enter two numbers to multiply by: "; cin >> x; cin >> y; product = out.mult(); cout << "the product is: " << product; cout << "Now enter a ..
I am trying to program a school bulletin (I’m sorry if that’s not the right word, what I meant with "bulletin" is the thing were are the grades of each student. English isn’t my 1° language) and I want to ask the user the name of the student and then create a int student_name; so ..
Recently I have been starting to participate in c++ contests but I cannot find the best way to handle user input when given in this format. E.g. 4 and 3 are the dimensions of the next block of input 4 3 1 2 4 5 1 6 7 4 1 5 0 0 The problem ..
I want to use Octal/Hexadecimal numbers in my program. Is there any way to directly input an Octal/Hexadecimal number in C++? Source: Windows Que..
In my case, I have to make sure the user input is either 1 or 2, or 3. Here’s my code: #include <iostream> using namespace std; void invalid_choice_prompt() { string msg = "nInvalid Command! Please try again."; cout << msg << endl; } int ask_user_rps_check_input(int user_choice) { if (user_choice == 1 || user_choice == 2 ..
|
https://windowsquestions.com/category/user-input/
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
Optimism Provider
Note: It is recommended to use a normal Web3 provider for now unless you need to access the additional properties that the Optimism Node attaches to RPC responses or the eth_sign based transaction signing. The full geth RPC is supported, and most users will not need these additional fields.
The OptimismProvider extends the ethers.js JsonRpcProvider and implements all of the same methods. It will sign transactions using eth_sign and submit transactions to the Optimism Sequencer through a new endpoint eth_sendRawEthSignTransaction. It needs a Web3Provider based provider to manage keys for any transaction signing.
Usage
import { OptimismProvider } from '@eth-optimism/provider' import { Web3Provider } from '@ethersproject/providers' // Uses a Web3Provider to manage keys, pass in `window.ethereum` or // another key management backend. const web3 = new Web3Provider() // Accepts either a URL or a network name (main, kovan) const provider = new OptimismProvider('', web3)
Goerli Testnet
To connect to the Goerli testnet:
const provider = new OptimismProvider('goerli')
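Because OptimismProvider implements the same interface as the ethers.js JsonRpcProvider, the usual read-only provider calls should work unchanged. A small illustrative sketch (not from the package docs):
// Standard ethers.js provider method, inherited from JsonRpcProvider
async function printCurrentBlock() {
  const blockNumber = await provider.getBlockNumber()
  console.log('current L2 block:', blockNumber)
}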
|
https://www.npmjs.com/package/@eth-optimism/provider
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
testfixtures
Installation
First, import it like this:
import ( "github.com/go-testfixtures/testfixtures/v3" )
Usage
{ ... } fixtures, err := testfixtures.NewFiles(db, &testfixtures.PostgreSQL{}, "fixtures/orders.yml", "fixtures/customers.yml", // add as many files you want )
Security
Sequences
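As a rough sketch only, loading fixtures with the v3 functional-options API could look like this (the database connection, dialect and fixtures directory below are assumptions - adapt them to your project):
package myapp_test

import (
	"database/sql"
	"testing"

	"github.com/go-testfixtures/testfixtures/v3"
)

var (
	db       *sql.DB // open this in TestMain with sql.Open(...)
	fixtures *testfixtures.Loader
)

func prepareTestDatabase(t *testing.T) {
	var err error
	fixtures, err = testfixtures.New(
		testfixtures.Database(db),                   // your *sql.DB connection
		testfixtures.Dialect("postgres"),            // or "mysql", "sqlite", ...
		testfixtures.Directory("testdata/fixtures"), // folder containing the YAML files
	)
	if err != nil {
		t.Fatal(err)
	}
	// Load truncates the tables and loads the fixture data before each test.
	if err := fixtures.Load(); err != nil {
		t.Fatal(err)
	}
}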
For PostgreSQL,
Compatible databases
PostgreSQL / TimescaleDB / CockroachDB
MySQL / MariaDB
Just make sure the connection string has the multistatement parameter set to true, and use:
testfixtures.New( ... testfixtures.Dialect("mysql"), // or "mariadb" )
Tested using the github.com/go-sql-driver/mysql driver.
SQLite
Microsoft SQL Server
Templating
Generating fixtures
Gotchas
Parallel testing
CLI
We also have a CLI to load fixtures in a given database.
Grab it from the releases page or install with Homebrew:
brew install go-testfixtures/tap/testfixtures
Usage is like this:
testfixtures -d postgres -c "postgres://user:[email protected]/database" -D testdata/fixtures
The connection string changes for each database driver.
Use
testfixtures --help for all flags.
Contributing
Alternatives
GitHub
|
https://golangexample.com/ruby-on-rails-like-test-fixtures-for-go-write-tests-against-a-real-database/
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
Python/C API: Reference Counting
5th semester is finally over and let's just say I have been to the dark side. Moving on. The most important aspect of Python is memory management. As mentioned in the earlier post of this series, PyObject* is a pointer to a weird data type representing an arbitrary Python object. All Python objects have a "type" and a "reference count", and they live on the heap, so you don't screw with them directly and only pointer variables can be declared. I had skipped the part of referencing. In this post, I'll talk about Memory Management in Python and Reference Counts.
So What is Reference Counting? Python's memory management is based on reference counting. Every Python object has a count of the number of references to the object. When the count becomes zero, the object can be destroyed and its memory reclaimed. Reference counts are always manipulated explicitly using the macros Py_INCREF() to increment the reference count and Py_DECREF() to decrement it by one. The decref macro is considerably more complex than the incref one, since it must check whether the reference count becomes zero and then call the object's deallocator, which is a function pointer contained in the object's type structure.
Background:
PyObject:
Python objects are structures allocated on the heap.
Accessed through pointers of type PyObject*
An object has a reference count that is increased or decreased when a pointer to the object is copied or deleted.
When the reference count reaches zero there are no references to the object left and it can be removed from the heap.
Py_INCREF() and Py_DECREF():
The macros Py_INCREF(op) and Py_DECREF(op) are used to increment or decrement reference counts.
Py_DECREF() also calls the object’s deallocator function; for objects that don’t contain references to other objects or heap memory this can be the standard function free()
The argument shouldn’t be a NIL pointer.
To prevent Memory Leaks, corresponding to each call to Py_INCREF(), there must be a call to Py_DECREF().
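To make the pairing rule concrete, here is a tiny sketch (it assumes list is a valid PyObject* pointing to a Python list):
/* PyList_GetItem returns a borrowed reference */
PyObject* item = PyList_GetItem(list, 0);
if (item == NULL)
    return NULL;
Py_INCREF(item);   /* we now hold our own reference to the object */
/* ... use item safely here ... */
Py_DECREF(item);   /* balance the Py_INCREF to avoid a memory leak */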
Owned vs Borrowed: Every PyObject pointer is either owned or borrowed. An owned reference means you are responsible for correctly disposing of the reference. Objects are not owned, they are all shared. It’s the references to the objects that are owned or borrowed.
A borrowed reference implies that some other piece of code (function) owns the reference: its interest in the object started before yours and will end after yours. So you must not deallocate the pointer or decrement the reference count. Otherwise, crash!
Acquiring Owned References: There are two ways to get an owned reference:
Accept a return value from a C function that returns a PyObject. Most C API functions that return PyObject return a new reference, but not all. Some return the borrowed reference. Read the docs carefully. Python/C API functions that borrow references - StackOverflow.
Use Py_INCREF on a borrowed PyObject pointer you already have. This increments the reference count on the object, and obligates you to dispose of it properly.
Discarding Owned References: Once you have an owned reference, you have to discard it properly. There are three ways.
Return it to the caller of your function. This transfers the ownership from you to your caller. Now they have an owned reference.
Use Py_DECREF() to decrement the reference count.
Store it with PyTuple_SetItem() or PyList_SetItem(), which are unusual among C API functions: they steal ownership of their item argument. Python/C API functions that steal reference - StackOverflow.
Code Example: So I have been working on my pointer skills and created a simple selection sort to sort a list of integers. Here’s the code. Include Libraries:
#include <Python.h> #include <stdio.h> #include <stdlib.h>
C Function: PyArg_ParseTuple works like scanf and returns 1 if successful. So, here I'm trying to check whether the passed argument (one argument only) is a PyObject (list) or not and, similar to scanf, storing the value of the argument in list (PyObject*).
PyObject* py_selectionSort(PyObject* self, PyObject* args) { PyObject* list; if (!PyArg_ParseTuple(args, "O", &list)) { printf("Not a list\n"); return NULL; } // continued }
NOTE: list is an object we pull out of the args with PyArg_ParseTuple. Since args is borrowed, we can borrow value out of it, so list is also borrowed.
Create array from list: I am creating an integer array from the Python list.
PyObject* py_selectionSort(PyObject* self, PyObject* args) { // continued.. PyObject* list_item; Py_ssize_t i, len; len = PyList_Size(list); long* list_array = (long*) malloc(len*(sizeof(long))); /* create array from list */ for(i=0; i<len; i++) { list_item = PyList_GetItem(list, i); if PyInt_Check(list_item) { *list_array = PyInt_AsLong(list_item); list_array++; } // Py_DECREF(list_item); No need to decrease the reference count // as PyList_GetItem returns a borrowed reference. } list_array-=len; // continued.. }
@franksmit pointed out the mistake with borrowed reference.
Here list_item is an owned reference, so it is your responsibility to discard it properly using Py_DECREF.
Sorting: This is just C part. You can skip this part if you know how to sort stuff.
PyObject* py_selectionSort(PyObject* self, PyObject* args) { // continued.. int min = *list_array; int index = 0; Py_ssize_t j, k; int temp_element; for (j=0; j<len; j++) { list_array+=j; min = *list_array; index = j; for (k=j; k<len; k++) { if (*list_array < min) { min = *list_array; index = k; } list_array++; } list_array = list_array - len; if (index != j) { temp_element = *(list_array + j); *(list_array + j) = *(list_array+ index); *(list_array + index) = temp_element; } } // continued .. }
Create list from the sorted array: After sorting the integer array, I want a list (PyObject*) which I can return.
PyObject* py_selectionSort(PyObject* self, PyObject* args) { // continued.. /* create list from sorted array */ PyObject* flist = PyList_New(len); for (i=0; i<len; i++) { list_item = PyInt_FromLong(*list_array); PyList_SetItem(flist, i, list_item); list_array++; //Py_DECREF(list_item); /* PyList_SetItem steals the reference */ } list_array-=len; // continued .. }
NOTE:Here we’re not to use Py_DECREF to decrement the reference count of list_item because PyList_SetItems() steals the reference.
Cleaning Up and Return: Memory management is very important in any Python program hence you need to clean up to avoid memory leaks and crashes.
PyObject* py_selectionSort(PyObject* self, PyObject* args) { // continued .. /* make sure that list_array points to the first allocated block of the array * else, free() will cause a segmentation fault. */ free(list_array); /* list is an object we pull out of the args with PyArg_ParseTuple. * Since args is borrowed, we can borrow value out of it, * so list is also borrowed. */ //Py_DECREF(list); return flist; }
NOTE: list is an object we pull out of the args with PyArg_ParseTuple. Since args is borrowed, we can borrow value out of it, so list is also borrowed. list_array must point to the first allocated block of the array while using free() to deallocate it.
Afterpart: Declare PyMethodDef and the module.
PyMethodDef methods[] = { {"selectionSort",(PyCFunction)py_selectionSort, METH_VARARGS, NULL}, {NULL, NULL,0,NULL}, }; PyMODINIT_FUNC initselectionSort(void) { Py_InitModule3("selectionSort", methods, "Extension module example!"); }
You can find/fork code on my GitHub.
References: Python/C API docs A Whirlwind Excursion through Python C Extensions - Ned Batchelder Ed’s Eclectic Science Page Numerous StackOverlfow questions.
P.S. Winter break is on. I need to speed things up and start with Mahotas and other Computer Vision work. I have to make some notes, presentation slides and gather all my CV work for the next semester.
Playing around with Android UI
Articles focusing on Android UI - playing around with ViewPagers, CoordinatorLayout, meaningful motions and animations, implementing difficult customized views, etc.
|
https://jayrambhia.com/blog/pythonc-api-reference-counting
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
From: Greg Colvin (gcolvin_at_[hidden])
Date: 1999-07-29 09:30:48
Sorry to be nitpicking so much here. It may not be that important what
convention Boost uses, but it is important to follow a convention, or
else some combination of command line switches, environment variables,
and where you compile from could suck in an inconsistent set of Boost
headers.
From: Beman Dawes <beman_at_[hidden]>
> ....
>
> I went back and read both the C standard and C rationale, and there
> is no indication that <...> is for system files, and "..." is for
> user files unless you want to deduce that from some rationale
> discussion of early C implementation search paths.
Yep -- it's a stretch, I know. From K&R, first edition, page 86:
To facilitate handling collections of #defines and declarations
(among other things) C provides a file inclusion feature. Any
line that looks like
#include "filename"
is replaced by the contents of the file filename. (The quotes
are mandatory.) ...
No mention in this section of angle brackets, which appear on page
143 in the Input and Output chapter:
Each source file that refers to a standard library function must
contain the line
#include <stdio.h>
somewhere near the beginning. ... Use of the angle brackets < and >
instead of the usual double quotes directs the compiler to search
for the file in a directory containing standard header information
(on Unix, typically /usr/include).
So users are taught to use the quoted for their own files and the
bracketed form for standard files. And despite the name usr,
users typically do not copy their own files into /usr/include,
since it is shared by other users of the system.
> > I had thought that the intent was
> >that:
> > <name> forms were intended for standard and system C++ headers;
> > <name.h> forms were intended for standard and system C headers;
> > "whatever" forms were intended for user source files;
> >where "headers" need not be files at all.
>
> Well, neither standard says anything about "intended for standard and
> system headers" or "intended for user source files".
I see the Standard's careful use of the term "file" for the quoted
form and "header" for the bracketed form as hinting back at this
convention, while struggling not to actually require a file system.
If anyone has access to an OS 360/370 compiler I'd be interested in
what the rules are there, since as I recall that system has a notion
"file" very different from the Unix/CPM/MSDOS tradition.
> The convention I have seen used a lot is that <...> is for headers
> that contain public interfaces to a library, while "..." is for
> headers which are part of the library's implementation and not part
> of its public interface. That, coupled with the implementation
> directory not being in the <...> path, keeps a user from
> inadvertently including an implementation header which isn't part of
> the public interface.
Yes, this is a reasonable convention, to the extent that a library is
seen as "system" code.
> Don't take my word of it. Look at various public libraries. I just
> grepped SGI's STL library, and it uses the <...> form for all
> headers, system or otherwise.
As does Oracle's C source code. But before Oracle I never used angle
brackets to include any source written by me or my company.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
|
https://lists.boost.org/Archives/boost/1999/07/0454.php
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
Perl 5 By Example
Chapter 15
Perl Modules
- Module Constructors and Destructors
- Symbol Tables
- The require Compiler Directive
- The use Compiler Directive
- What's a Pragma?
- The strict Pragma
- The Standard Modules
- strict, my() and Modules
- Module Examples
- Summary
- Review Questions
- Review Exercises:
- The file name should be the same as the package name.
- The package name should start with a capital letter.
- The file name should have a file extension of pm.
- The package should be derived from the Exporter class if object-oriented techniques are not being used.
- The module should export functions and variables to the main namespace using the @EXPORT and @EXPORT_OK arrays if object-oriented techniques are not being used.
Module Constructors and Destructors
You.
The BEGIN Block
The BEGIN block is evaluated as soon as it is defined. Therefore, it can include other functions using do() or require statements. Since the blocks are evaluated immediately after definition, multiple BEGIN blocks will execute in the order that they appear in the script.
Define a BEGIN block for the main package.
Display a string indicating the begin block is executing.
Start the Foo package.
Define a BEGIN block for the Foo package.
Display a string indicating the begin block is executing.
Listing 15.1 15LST01.PL-Using BEGIN Blocks
BEGIN {
    print("main\n");
}

package Foo;

BEGIN {
    print("Foo\n");
}
This program displays:
main
Foo
The END Block
The END blocks are the last thing to be evaluated. They are even evaluated after exit() or die() functions are called. Therefore, they can be used to close files or write messages to log files. Multiple END blocks are evaluated in reverse order.
Listing 15.2 15LST02.PL-Using END Blocks
END {
    print("main\n");
}

package Foo;

END {
    print("Foo\n");
}
This program displays:
Foo
main
Symbol Tables
Each::. Listing 15.3 shows a program that displays all of the entries in the Foo:: namespace.
Define the dispSymbols() function.
Get the hash reference that should be the first parameter.
Declare local temporary variables.
Initialize the %symbols variable. This is done to make the code easier to read.
Initialize the @symbols variables. This variable is also used to make the code easier to read.
Iterate over the symbols array displaying the key-value pairs of the symbol table.
Call the dispSymbols() function to display the symbols for the Foo package.
Start the Foo package.
Initialize the $bar variable. This will place an entry into the symbol table.
Define the baz() function. This will also create an entry into the symbol table.
Listing 15.3 15LST03.PL-How to Display the Entries in a Symbol Table.
The require Compiler Directive
The.
The use Compiler Directive
When."
What's a Pragma?.
Listing 15.4 15LST04.PL-Using the integer Pragma;
Table 15.1 shows a list of the pragmas that you can use.
The strict Pragma
The most important pragma is strict. This pragma generates compiler errors if unsafe programming is detected. There are three specific things that are detected:
- Symbolic references
- Non-local variables (those not declared with my()) and variables that aren't fully qualified.
- Non-quoted words that aren't subroutine names or file handles.
Symbolic references use the name of a variable, held in a string, as the reference to the variable. Perl allows them by default, but they are easy to create by accident and hard to debug, which is why the strict pragma flags them. Listing 15.5 shows a program that uses symbolic references.
Declare two variables.
Initialize $ref with a reference to $foo.
Dereference $ref and display the result.
Initialize $ref to $foo.
Dereference $ref and display the result.
Invoke the strict pragma.
Dereference $ref and display the result.
Listing 15.5 15LST05.PL-Detecting Symbolic References
my($foo) = "Testing.";
my($ref);

$ref = \$foo;
print("${$ref}\n");    # Using a real reference

$ref = $foo;
print("${$ref}\n");    # Using a symbolic reference

use strict;
print("${$ref}\n");
When run with the command perl 15lst05.pl, this program displays:
Testing.
Can't use string ("Testing.") as a SCALAR ref while "strict refs" in use at 15lst05.pl line 14.
The second print statement, even though obviously wrong, does not generate any errors. Imagine if you were using a complicated data structure such as the ones described in Chapter 8 "References." You could spend hours looking for a bug like this. After the strict pragma is turned on, however, a runtime error is generated when the same print statement is repeated. Perl even displays the value of the scalar that attempted to masquerade as the reference value.
The strict pragma ensures that all variables that are used are either local to the current block or they are fully qualified. Fully qualifying a variable name simply means to add the package name where the variable was defined to the variable name. For example, you would specify the $numTables variable in package Room by saying $Room::numTables. If you are not sure which package a variable is defined in, try using the dispSymbols() function from Listing 15.3. Call the dispSymbols() function once for each package that your script uses.
The last type of error that strict will generate an error for is the non-quoted word that is not used as a subroutine name or file handle. For example, the following line is good:
$SIG{'PIPE'} = 'Plumber';
And this line is bad:
$SIG{PIPE} = 'Plumber';
Perl 5, without the strict
pragma, will do the correct thing in the bad situation and assume
that you meant to create a string literal. However, this is considered
bad programming practice.
The Standard Modules
Table 15.2 lists the modules that should come with all distributions
of Perl. Some of these modules are not portable across all operating
systems, however. The descriptions for the modules mention the
incompatibility if I know about it.
strict, my() and Modules
In order to use the strict pragma with modules, you need to know a bit more about the my() function and how it creates lexical variables instead of local variables. You may be tempted to think that variables declared with my() are local to a package, especially since you can have more than one package statement per file. However, my() does the exact opposite; in fact, variables that are declared with my() are never stored inside the symbol table.
Module Examples
This section shows you how to use the Carp, English, and Env modules. After looking at these examples, you should feel comfortable about trying the rest.
Example: The Carp Module
This module provides the carp(), croak(), and confess() functions, which report warnings and errors from the perspective of the calling code rather than from the module that detects the problem. Confused? So was I, until I did some experimenting. The results of that experimenting can be found in Listing 15.6.
Load the Carp module.
Invoke the strict pragma.
Start the Foo namespace.
Define the foo() function.
Call the carp() function.
Call the croak() function.
Switch to the main namespace.
Call the foo() function.
Listing 15.6 15LST06.PL-Using the carp() and croak() from the Carp Module.
Load the Carp module.
Invoke the strict pragma.
Call foo().
Define foo().
Call bar().
Define bar().
Call baz().
Define baz().
Call Confess().
Listing 15.7 15LST07.PL-Using confess() from the Carp Module.
Example: The English Module
The English module is designed
to make your scripts more readable. It creates aliases for all
of the special variables that were discussed in Chapter 12, "Using
Special Variables." Table 15.3 lists all of the aliases that
are defined. After the table, some examples show you how the aliases
are used.
Listing 15.8 shows a program that uses one of the English variables to access information about a matched string.
Load the English module.
Invoke the strict pragma.
Initialize the search space and pattern variables.
Perform a matching operation to find the pattern
in the $searchSpace variable.
Display information about the search.
Display the matching string using the English variable names.
Display the matching string using the standard Perl special variables.
Listing 15.8 15LST08.PL-Using the English Module
use English;
use strict;

my($searchSpace) = "TTTT BBBABBB DDDD";
my($pattern) = "B+AB+";

$searchSpace =~ m/$pattern/;

print("Search space: $searchSpace\n");
print("Pattern: /$pattern/\n");
print("Matched String: $English::MATCH\n");   # the English variable
print("Matched String: $&\n");                # the standard Perl variable
This program displays
Search space: TTTT BBBABBB DDDD
Pattern: /B+AB+/
Matched String: BBBABBB
Matched String: BBBABBB
You can see that the $& and $MATCH variables are equivalent. This means that you can use another programmer's functions without renaming their variables and still use the English names in your own functions.
Example: The Env Module
If you use environment variables a lot, then you need to look at the Env module. It will enable you to directly access the environment variables as Perl scalar variables instead of through the %Env hash. For example, $PATH is equivalent to $ENV{'PATH'}.
Load the Env module.
Invoke the strict pragma.
Declare the @files variable.
Open the temporary directory and read all of its files.
Display the name of the temporary directory.
Display the names of all files that end in tmp.
Listing 15.9 15LST09.PL-Displaying Temporary Files Using the Env Module
use Env;
use strict;

my(@files);

opendir(DIR, $main::TEMP);
@files = readdir(DIR);
closedir(DIR);

print "$main::TEMP\n";
foreach (@files) {
    print("\t$_\n") if m/\.tmp/i;
}
This program displays:
C:\WINDOWS\TEMP
        ~Df182.TMP
        ~Df1B3.TMP
        ~Df8073.TMP
        ~Df8074.TMP
        ~WRS0003.tmp
        ~Df6116.TMP
        ~DFC2C2.TMP
        ~Df9145.TMP
This.
Summary
In this chapter, you learned about Perl modules. You read about several guidelines that should be followed when creating modules. For example, package names should have their first letter capitalized, and module files should use the file extension pm. Env provides aliases for environment variables so that you can access them directly instead of through the %Env hash variable.
In the next chapter, you learn about debugging Perl code. You read about syntax or compile-time errors versus runtime errors. The strict pragma will be discussed in more detail.
Review Questions
Answers to Review Questions are in Appendix A.
- What is a module?
- How is a module different from a library?
- What is the correct file extension for a module?
- What is a pragma?
- What is the most important pragma and why?
- What does the END block do?
- What is a symbol table?
- How can you create a variable that is local to a package?
Review Exercises
- Write a program that uses BEGIN and END blocks to write a message to a log file about the start and end times for the program.
- Use the English module to display Perl's version number.
- Modify the dispSymbols() function from Listing 15.3 to display only function and variable names passed as arguments.
- Execute the program in Listing 15.5 with the -w command line option. Describe the results.
- Write a module to calculate the area of a rectangle. Use the @EXPORT array to export the name of your function.
|
http://www.webbasedprogramming.com/Perl-5-By-Example/ch15.htm
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
The Switch component is used as an alternative for the Checkbox component. You can switch between enabled or disabled states.
import { Switch } from '@nextui-org/react';
You can change the state with
checked prop
Unusable and un-clickable
Switch.
Change the size of the entire
Switch including
padding and
border with the
size property.
You can change the color of the
Switch with the property color.
You can add a shadow effect with the property
shadow.
You can change the full style towards a squared
Switch with the
squared property.
You can change the full style towards a bordered
Switch with the
bordered property.
You can disable the animation of the entire
Switch with the property
animated={false}.
NextUI doesn't use any library or icon font by default, with this we give the freedom to use the one you prefer. In the following example we use Boxicons
type NormalColors = | 'default' | 'primary' | 'secondary' | 'success' | 'warning' | 'error' | 'gradient';
type NormalSizes = 'xs' | 'sm' | 'md' | 'lg' | 'xl';
interface SwitchEvent { target: SwitchEventTarget; stopPropagation: () => void; preventDefault: () => void; nativeEvent: React.ChangeEvent; }
interface SwitchEventTarget { checked: boolean; }
|
https://nextui.org/docs/components/switch
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
One common observation about Scheme, and more generally about Lisp, is that it's insular. Critics agree that while Scheme is simple and elegant, it can be hard to get it to do anything useful in a commercial sense. One reason for this is that most Schemes support or bind to common APIs clumsily at best. The Scheme tradition has properly emphasized portability, intellectual rigor, and extensibility. However, this has often led to isolation from a particular platform's specific features and a tendency to reinvent the wheel.
Matthias Felleisen, self-described iconoclast and head of the Programming Languages Team at Rice University, is aware of this. He explains that the prospect of Scheme's commercial adoption distresses even some people that argue for it; according to Felleisen, they would be unhappy at no longer being able to change their favorite languages and experiment with new constructs, type systems, or module systems.
One quick way to bridge the API gap is to exploit Java's rich library and reflection capabilities. Interpretively combining Java and Scheme allows the Java APIs to multiply Schemers' productivity. The union also gives Java programmers a more congenial environment for experimental development, rapid prototyping, and automated class testing. Moreover, Scheme and its relatives remain the programming languages of choice in areas where Java is weak, such as artificial intelligence and advanced work in program verification.
Silk is an interpreter that combines Java and Scheme. It relies on naming conventions to make Java APIs visible in Scheme. Look at how "Hello, world" in Silk takes advantage of the Java Abstract Windowing Toolkit (AWT):
(define (init thisApplet)
(main ()))
(define (main ShellArgs)
(define win (java.awt.Frame. "Hello World"))
(define quit (java.awt.Button. "quit"))
(.add win "Center" (java.awt.Label. "Hello World"))
(.add win "South" quit)
(.addActionListener quit (Listener11. (lambda(e) (.hide win))))
(.pack win)
(.show win)
Silk executes this script directly in a Java virtual machine (JVM); a developer need not stop to invoke a byte-code compiler. Java reflection gives Silk the ability to query variables and immediately execute Java functions. The user enters a function name, and the JVM executes a sequence synthesized by reflection directly from the symbolic reference, instead of from the kind of byte-code object file that javac produces.
javac
Silk can also construct new Java classes, access Java variables, and execute Java functions through its naming convention. A Silk script to create a Java object and refer to it by the name of its Java source variable is as simple as:
(define str (java.lang.String. "hello"))
(.length str)
The . appended to the end of the String class name tells the Silk interpreter to construct a new class instance by invoking the String constructor. String takes a string variable as its argument. The . prefixed to length signals the Silk interpreter to execute the Java method length with the argument str.
.
String
length
str
Silk handles user-defined classes with equal facility. Compile:
package myclass;
public class SchemeAsOne extends Object {
};
This is visible within Silk as:
(define scheme_class (myclass.SchemeAsOne. )).
Silk in Java
All Scheme functions can be accessed within Java by using Scheme handlers. Within Java source code, for example, you might invoke:
Symbol start = Symbol.intern("start");
Symbol canvas = Symbol.intern("canvas");
silk.Scheme.eval(new Pair(start,
new Pair( new Pair(canvas, new Pair (this, Pair.EMPTY)))));
One clever use of Silk is to execute Scheme source through a URL stream:
silk.Scheme.load(new InputPort(new URL("file:localprogram").openStream()));
Because the entire Silk interpreter takes up only 50 KB, it's practical to represent Scheme source as Java applets embedded in Webpages.
<html>
<APPLET code="silk.SchemeApplet"
codeBASE="."
WIDTH="100" HEIGHT="50"
ARCHIVE="silk.jar">
<param name="prog" value="http:HelloWorld.silk">
<param name="init" value="init">
</APPLET>
</html>
silk.jar contains the Scheme interpreter. In this example, the Scheme source is in the filename HelloWorld.silk local to the server. It receives SchemeApplet as a parameter.
silk.jar
HelloWorld.silk
SchemeApplet
Three programmers currently maintain Silk: Ken Anderson, Tim Hickey, and Peter Norvig. Norvig submitted a technical report on Silk to the Workshop on Scheme and Functional Programming 2000, which we mentioned in one of October's columns. Hickey, an associate professor of computer science at Brandeis University, teaches courses on Silk.
A decade ago, Scheme was often positioned as a competitor for C and C++, and more recently, to Java. Now it is possible for Scheme to complement Java and C++. Silk is the best-positioned candidate to hybridize the procedural and functional strengths of Java and Scheme, respectively.
Learning from seniors
Several other readers wrote that Scheme deserves more attention from developers, if only because it's already weathered storms that other languages are only beginning to feel. We often mention Python, for example, and have encouraged it as an ideal first language. Python originated from educational research, and its Computer Programming for Everybody (CP4E) project has received considerable publicity. Still, classrooms have relied on Scheme for over 15 years, a record that dwarfs Python's.
Similarly, we've lauded Christian Tismer's Stackless Python. Many of the engineering choices involved in this reimplementation of Python can be traced directly to work done with Scheme (and with Icon and Forth) about a decade ago. Tismer is a rigorous thinker who recognizes that he stands on the shoulders of others.
For those not familiar with the scholarship of programming languages, we're happy "to give a little credit to people who've spent 20 years on a problem," as Shriram Krishnamurthi of Brown University pleads. Krishnamurthi also said that for the Scheme workshop, "We expected about half as many people as showed." Scheme is attracting more attention, and deservedly so.
|
http://www.itworld.com/AppDev/4123/swol-1117-regex-dl/
|
crawl-001
|
en
|
refinedweb
|
Using Alerts, Images, Timers, and Gauges in MIDlets
By Richard G. Baldwin
Java Programming Notes # 2580
Preface
Viewing tip
Figures
Listings
Supplementary material
General background information
The Alert class
The AlertType class
The Image class
The Gauge class
The Timer class
The TimerTask class
Preview
Discussion and sample code
The MIDlet named Alert01
The MIDlet named Alert02
Run the programs
Summary
What's next?
Resources
Complete program listings
About the author
Java Programming Notes # 2580:
What you will learn in this lesson
In this lesson you will learn how to use the Alert, AlertType, Image, Gauge, Timer, and TimerTask classes in MIDlets.
A constructor for the Alert class
As of MIDP 2.0, there are two overloaded constructors for the Alert class. The constructor that I will use in this lesson takes four parameters: a title string, the alert text, an Image object to be displayed, and an AlertType object.
In the remainder of this lesson, I will present and explain two MIDlets named Alert01 and Alert02. The primary differences between the two will be in the areas of alert type and Gauge mode.
The purpose of this MIDlet is to illustrate:
Each time the Alert becomes visible, it obscures a TextBox object that is also being displayed by the MIDlet. When the Alert disappears, the TextBox reappears.
Requirements
This MIDlet requires the MIDlet development framework named WTKFramework03 (provided in Listing 13) and two small image files, as discussed under Run the programs below.
public class Alert01 extends MIDlet{
Alert01 theMidlet;
Image image;
int count = 0;
long baseTime;
public Alert01(){
System.out.println("Construct MIDlet");
theMidlet = this;
baseTime = new Date().getTime()/1000;
}//end constructor.
public void startApp(){
System.out.println("Create and display a TextBox");
TextBox textBox = new TextBox("TextBox Title",
"TextBox contents",
50,//width
TextField.ANY);
//Make the TextBox the current Displayable object.
Display.getDisplay(this).setCurrent(textBox);.
Timer myTimer = new Timer();
myTimer.schedule(new MyTimerTask(),2000,3000);.
//Sleep for 20 seconds.
try{Thread.currentThread().sleep(20000);
} catch(Exception e){}
//Cancel the timer.
myTimer.cancel();
//Enter the destroyed state.
this.destroyApp(true);
}//end startApp.
public void pauseApp(){
}//end pauseApp
public void destroyApp(boolean unconditional){
System.out.println("Destroy MIDlet");
notifyDestroyed();
}//end destroyApp.
class MyTimerTask extends TimerTask{
long time;
public void run(){
System.out.println("Display an Alert");
try{
//Select among two image files on the basis of
// whether the current time in seconds is odd
// or even.
time = new Date().getTime()/1000 - baseTime;
//Note that the following file names are case
// sensitive.
if((time % 2) == 0){//Even value
image = Image.createImage(
"/Alert01/redball.PNG");
}else{//Odd value
image = Image.createImage(
"/Alert01/blueball.PNG");
}//end else.
Alert alert = new Alert("Alert Title",
"",
image,
AlertType.ALARM);
//Cause the alert to display the time in seconds.
alert.setString("Time in seconds:" + time);
//Cause the alert to be visible for two seconds.
alert.setTimeout(2000);.
Gauge gauge = new Gauge(null,false,6,0);
//Set the number of Gauge bars to be illuminated.
gauge.setValue(++count);
//Attach the Gauge to the alert.
alert.setIndicator(gauge);.
Display.getDisplay(theMidlet).setCurrent(alert);
}catch(Exception e){
e.printStackTrace();
}//end catch
}//end run
}//end class MyTimerTask
}//end class Alert01
Listing 9 also signals the end of the run method, the end of the member class named MyTimerTask, and the end of the MIDlet class named Alert01.
As mentioned earlier, this MIDlet is very similar to the MIDlet named Alert01. Therefore, I will confine my explanation to the code that is different between the two and the results imparted by those code differences.
//Create an Alert object of type CONFIRMATION.
// This results in an audible alert that is three
// chimes.
Alert alert = new Alert("Alert Title",
"",
image,
AlertType.CONFIRMATION);.
Gauge gauge = new Gauge(
null,
false,
Gauge.INDEFINITE,
Gauge.INCREMENTAL_UPDATING);.
gauge.setValue(++count % 3);
The remaining code in the MIDlet named Alert02 is the same as the code in the MIDlet named Alert01.
I encourage you to copy the code from Listing 13, Listing 14, and Listing 15. Run the two MIDlets in the updated MIDlet development framework named WTKFramework03 that is provided in Listing 13.
You will also need two small image files. You can substitute any image files containing small images for the two image files listed above. You will have to make the names of your image files match the references to the image files in the code (see Listing 6).
In this lesson you learned how to use the Alert, AlertType, Image, Gauge, Timer, and TimerTask classes in MIDlets.
In the next lesson, among other things, you will learn how to create a List, how to display it in the Sun cell phone emulator, and how to determine which elements in the List are selected.
Complete listings of the programs discussed in this lesson are shown in Listing 13, Listing 14, and Listing 15 below:
Listing 13. The updated MIDlet development framework named WTKFramework03.
/*WTKFramework03.java
Updated: December 17, 2007
Version: WTKFramework03.java
Upgraded to prevent the deletion of image files and other
resource files when the program cleans up after itself.
This results in resource files being included in the JAR
file. The resource files should be in the same directory
as the source files.
Version: WTKFramework02.java
Upgraded to capture and display standard output and error
output from child processes.
Also upgraded to allow user to enter MIDlet name on the
command line. This is particularly useful when repeatedly
running this program from a batch file during MIDlet
development.
Version: WTKFramework01.java, which
are required for the deployment of the MIDlet program.
Given a file containing the source code for the MIDlet,
a single click of the mouse causes this framework to
automatically cycle through the following steps:
Compilation (targeted to Java v1.4 virtual machine)
Pre-verification
Creation of the manifest file
Creation of the JAR file
Creation of the JAD file
Deletion of extraneous files, saving the JAR and JAD files
Deployment and execution in Sun's cell phone emulator
The MIDlet being processed must be stored in a folder
having the same name as the main MIDlet class. The
folder containing the MIDlet must be a child of the
folder in which the framework is being executed.
Note: When you transfer control to a new process window by
calling the exec method, the path environment variable
doesn't go along for the ride. Therefore, you must
provide the full path for programs that you call in that
new process.
Tested using Java SE 6 and WTK2.5.2 running under
Windows XP.
*********************************************************/
import java.io.*;
import javax.swing.*;
import java.awt.*;
import java.awt.event.*;
public class WTKFramework03{
String toolkit = "M:/WTK2.5.2";//Path to toolkit root
String vendor = "Dick Baldwin";//Default vendor name
String midletVersion = "1.0.0";
String profile = "MIDP-2.0";
String profileJar = "/lib/midpapi20.jar";
String config = "CLDC-1.1";
String configJar = "/lib/cldcapi11.jar";
//Path to the bin folder of the Java installation
String javaPath = "C:/Program Files/Java/jdk1.6.0/bin";
String prog = "WTK001";
int initialCleanupOK = 1;//Success = 0
int compileOK = 1;//Compiler success = 0
int preverifyOK = 1;//Preverify success = 0
int deleteClassFilesOK = 1;//Delete success = 0
int moveFilesOK = 1;//Move success = 0
int manifestFileOK = 1;//Manifest success = 0
int jarFileOK = 1;//Jar file success = 0
int jadFileOK = 1;//Jad file success = 0
int cleanupOK = 1;//Cleanup success = 0
long jarFileSize = 0;
JTextField progName;
JTextField WTKroot;
JTextField vendorText;
JTextField midletVersionText;
JTextField javaPathText;
JRadioButton pButton10;
JRadioButton pButton20;
JRadioButton pButton21;
JRadioButton cButton10;
JRadioButton cButton11;
static WTKFramework03 thisObj;
//----------------------------------------------------//
public static void main(String[] args){
//Allow user to enter the MIDlet name on the command
// line. Useful when running from a batch file.
thisObj = new WTKFramework03();
if(args.length != 0)thisObj.prog = args[0];
thisObj.new GUI();
}//end main
//----------------------------------------------------//
void runTheProgram(){
//This method is called when the user clicks the Run
// button on the GUI.
System.out.println("PROGRESS REPORT");
System.out.println("Running program named: " + prog);
//This code calls several methods in sequence to
// accomplish the needed actions. If there is a
// failure at any step along the way, the
// framework will terminate at that point with a
// suitable error message.
//Delete leftover files from a previous run, if any
// exist
deleteOldStuff();
if(initialCleanupOK != 0){//Test for success
System.out.println("Initial cleanup error");
System.out.println("Terminating");
System.exit(1);
}//end if
compile();//compile the MIDlet
if(compileOK != 0){//Test for successful compilation.
System.out.println("Terminating");
System.exit(1);
}//end if
preverify();//Pre-verify the MIDlet class files
if(preverifyOK != 0){
System.out.println("Terminating");
System.exit(1);
}//end if
//Delete the class files from the original program
// folder
deleteClassFilesOK = deleteProgClassFiles();
if(deleteClassFilesOK != 0){
System.out.println("Terminating");
System.exit(1);
}//end if
//Move the preverified files back to the original
// program folder
movePreverifiedFiles();
if(moveFilesOK != 0){
System.out.println("Terminating");
System.exit(1);
}//end if
//Make manifest file
makeManifestFile();
if(manifestFileOK != 0){
System.out.println("Manifest file error");
System.out.println("Terminating");
System.exit(1);
}//end if
//Make Jar file
makeJarFile();
if(jarFileOK != 0){
System.out.println("JAR file error");
System.out.println("Terminating");
System.exit(1);
}//end if
//Make Jad file
makeJadFile();
if(jadFileOK != 0){
System.out.println("Terminating");
System.exit(1);
}//end if
//Delete extraneous files
cleanup();
if(cleanupOK != 0){
System.out.println("Terminating");
System.exit(1);
}//end if
//Run emulator
runEmulator();
//Reset success flags
initialCleanupOK = 1;//Success = 0
compileOK = 1;//Compiler success = 0
preverifyOK = 1;//Preverify success = 0
deleteClassFilesOK = 1;//Delete success = 0
moveFilesOK = 1;//Move success = 0
manifestFileOK = 1;//Manifest success = 0
jarFileOK = 1;//Jar file success = 0
jadFileOK = 1;//Jad file success = 0
cleanupOK = 1;//Cleanup success = 0
//Control returns to here when the user terminates
// the cell phone emulator.
System.out.println(
"\nClick the Run button to run another MIDlet.");
System.out.println();//blank line
}//end runTheProgram
//----------------------------------------------------//
//Purpose: Delete leftover files at startup
void deleteOldStuff(){
System.out.println(
"Deleting leftover files from a previous run");
//Delete subdirectory from output folder if it exists.
int successFlag = deleteOutputSubDir();
//Delete manifest file if it exists.
File manifestFile = new File("output/Manifest.mf");
if(manifestFile.exists()){
boolean success = manifestFile.delete();
if(success){
System.out.println(" Manifest file deleted");
}else{
successFlag = 1;
}//end else
}//end if
//Delete old JAR file if it exists.
File jarFile = new File("output/" + prog + ".jar");
if(jarFile.exists()){
boolean success = jarFile.delete();
if(success){
System.out.println(" Old jar file deleted");
}else{
successFlag = 1;
}//end else
}//end if
//Delete old JAD file if it exists.
File jadFile = new File("output/" + prog + ".jad");
if(jadFile.exists()){
boolean success = jadFile.delete();
if(success){
Sy
|
http://www.developer.com/java/j2me/article.php/3736301
|
crawl-001
|
en
|
refinedweb
|
ElementTree Overview
“But I have found that sitting under the ElementTree, one can feel the Zen of XML.”
— Essien Ita Essien
Update 2007-09-12: ElementTree 1.3 alpha 3 is now available. For more information, see Introducing ElementTree 1.3.
Update 2007-08-27: ElementTree 1.2.7 preview is now available. This is 1.2.6 plus support for IronPython. The serializer is ~20% faster, and now supports newlines in attribute values.
There’s also an independent implementation, lxml.etree, based on the well-known libxml2/libxslt libraries. This adds full support for XSLT, XPath, and more.
For more implementations and add-ons, see the Interesting Stuff section below.
Installation #
Binary installers are available for many platforms, including Windows, Mac OS X, and most Linux distributions. Look for packages named “python-elementtree” or similar.
To install from source, simply unpack the distribution archive, change to the distribution directory, and run the setup.py script as follows:
$ python setup.py install
When you’ve done this, you should be able to import the ElementTree module, and other modules from the elementtree package:
$ python >>> from elementtree import ElementTree
It’s common practice to import ElementTree under an alias, both to minimize typing, and to make it easier to switch between different implementations:
$ python >>> import elementtree.ElementTree as ET >>> import cElementTree as ET >>> import lxml.etree as ET >>> import xml.etree.ElementTree as ET # Python 2.5
Note that if you only need the core functionality, you can include the ElementTree.py file in your own project. To get path support, you also need ElementPath.py. All other modules are optional.
Basic Usage #
Each Element instance can have an identifying tag, any number of attributes, any number of child element instances, and an associated object (usually a string). To create elements, you can use the Element or Subelement factories:
import elementtree.ElementTree as ET

# build a tree structure
root = ET.Element("html")

head = ET.SubElement(root, "head")

title = ET.SubElement(head, "title")
title.text = "Page Title"

body = ET.SubElement(root, "body")
body.set("bgcolor", "#ffffff")
body.text = "Hello, World!"

# wrap it in an ElementTree instance, and save as XML
tree = ET.ElementTree(root)
tree.write("page.xhtml")
The ElementTree wrapper adds code to load XML files as trees of Element objects, and save them back again. You can use the parse function to quickly load an entire XML document into an ElementTree instance:
import elementtree.ElementTree as ET

tree = ET.parse("page.xhtml")

# the tree root is the toplevel html element
print tree.findtext("head/title")

# if you need the root element, use getroot
root = tree.getroot()

# ...manipulate tree...

tree.write("out.xml")
For more details, see Elements and Element Trees.
Documentation #
Zone articles:
Elsewhere:
Andrew Dalke: IterParseFilter: XPath-like filtering of ElementTree’s iterparse event stream
Andrew Dalke: PyProtocols for output generation
Martijn Faassen: lxml and (c)ElementTree
Andrew Kuchling: Processing XML with ElementTree [slides from a talk]
Danny Yoo: ElementTree mini-tutorial [“Let’s work through a small example with it; that may help to clear some confusion.“]
Joseph Reagle: XML ElementTree Data Model
Uche Ogbuji: Simple XML Processing With elementtree [xml.com]
David Mertz: Process XML in Python with ElementTree: How does the API stack up against similar libraries? [ibm developerworks]
Uche Ogbuji: Python Paradigms for XML
Uche Ogbuji: XML Namespaces Support in Python Tools, Part Three [xml.com]
Uche Ogbuji: Practical SAX Notes: ElementTree, Namespaces and Techniques for Large Documents [xml.com]
Interesting stuff built with (or for) ElementTree (selection):
L. C. Rees: webstring (webstring is a web templating engine that allows programs to manipulate XML and HTML documents with standard Python sequence and string operators. It is designed for those whose preferred web template languages are Python and HTML (and XML for people who swing that way).
Chris McDonough: meld3 (an XML templating system for Python 2.3+ which keeps template markup and dynamic rendering logic separate from one another, based on PyMeld)
Peter Hunt: pymeld4 (another ET-based implementation of the PyMeld templating language)
Seo Sanghyeon: pyexpat/ElementTree for IronPython (a pyexpat emulation for IronPython which lets you use the standard ElementTree module on that platform)
Oren Tirosh: ElementBuilder (friendly syntax for constructing ElementTree:s)
Staffan Malmgren: lagen.nu (a nicely formatted, hyperlinked, linkable, and taggable version of the entire body of swedish law) (more information)
Ralf Schlatterbeck: OOoPy (a tool to inspect, create, and modify OpenOffice.org documents in Python)
Martijn Faassen: lxml (ElementTree-compatible bindings for libxml2 and libxslt).
Martin Pool, et al: Bazaar-NG (version management system)
Seth Vidal, Konstantin Ryabitsev, et al: Yellow dog Updater, Modified (an automatic updater and package installer/remover for rpm systems)
Michael Droettboom: pyScore (a set of Python-based tools for working with symbolic music notation)
Ryan Tomayko: Kid (a template language)
Ken Rimey: PDIS XPath (a more complete XPath implementation)
Roland Leuthe: minixsv (a lightweight XML schema validator written in pure Python)
Bruno da Silva de Oliveira, Joel de Guzman: Pyste (a Python binding generator for C++)
Works in progress:
- ElementTree: Working with Qualified Names
- Using the ElementTree Module to Generate Google Requests
- A Simple Technorati Client
- Using Element Trees to Parse WSDL Files
- Using Element Trees to Parse XBEL Files
- Using ElementTrees to Generate XML-RPC Messages
- Generating Tkinter User Interfaces from XML
- A Simple XML-Over-HTTP Class
- You Can Never Have Too Many Stock Tickers!
Comment:
On some Linux systems, notably Debian-based systems, you'll need to have the Python2.3-dev (or Python2.4-dev) package installed in order to be able to compile C extensions.
Posted by Berco (2006-11-17)
|
http://www.effbot.org/zone/element-index.htm
|
crawl-001
|
en
|
refinedweb
|
|< Windows Registry Tutorial 3 | Main | Windows Share Programming 1 >| Site Index | Download | Disclaimer | Privacy |
MODULE P1
WINDOWS OS
.:: REGISTRY: EXAMPLES AND EXPLOITS::.
PART 4
// #define _WIN32_WINNT 0x0502 // Windows Server 2003 family
// For Win Xp, change accordingly...
#define _WIN32_WINNT 0x0501
// #define _WIN32_WINNT 0x0500 // Windows 2000
// #define _WIN32_WINNT 0x0400 // Windows NT 4.0
// #define _WIN32_WINDOWS 0x0500 // Windows ME
// #define _WIN32_WINDOWS 0x0410 // Windows 98
// #define _WIN32_WINDOWS 0x0400 // Windows 95
#include <windows.h>
#include <stdio.h>
// Change accordingly...
#define POLICY_KEY TEXT("Software\\Policies\\Microsoft\\Windows\\Explorer")
#define PREFERENCE_KEY TEXT("Software\\Microsoft\\Windows\\CurrentVersion\\Explorer")
DWORD ReadValue(LPTSTR lpValueName, DWORD dwDefault)
{
HKEY hKey;
LONG lResult;
DWORD dwValue, dwType, dwSize = sizeof(dwValue);
// First, check for a policy.
lResult = RegOpenKeyEx(HKEY_CURRENT_USER, POLICY_KEY, 0, KEY_READ, &hKey);
if(lResult == ERROR_SUCCESS)
{
lResult = RegQueryValueEx(hKey, lpValueName, 0, &dwType, (LPBYTE)&dwValue, &dwSize);
RegCloseKey(hKey);
}
// Exit if a policy value was found.
if(lResult == ERROR_SUCCESS)
{
// return the data value
return dwValue;
}
else
printf("Policy: value not found!\n");
// Second, check for a preference.
lResult = RegOpenKeyEx(HKEY_CURRENT_USER, PREFERENCE_KEY, 0, KEY_READ, &hKey);
if(lResult == ERROR_SUCCESS)
{
lResult = RegQueryValueEx(hKey, lpValueName, 0, &dwType, (LPBYTE)&dwValue, &dwSize);
RegCloseKey (hKey);
}
// Exit if a preference was found.
if(lResult == ERROR_SUCCESS)
{
// Return the data value
return dwValue;
}
else
printf("Preference: value not found!\n");
// Neither a policy nor a preference was found; return the default value.
return dwDefault;
}
int main()
{
LPTSTR lpValueName = "Browse For Folder Height";
DWORD dwDefault = 0x00000000;
DWORD ret = ReadValue(lpValueName, dwDefault);
printf("The value data for the \'%s\' value name is 0X%.8X(%d).\n", lpValueName, ret, ret);
return 0;
}
Policy: value not found!
The value data for the 'Browse For Folder Height' value name is 0X00000120(288).
Press any key to continue
Figure 3: Getting the Registry values.
The performance data contains information for a variable number of object types, instances per object, and counters per object type. Therefore, the number and size of blocks in the performance data varies. To ensure that your application correctly receives the performance data, you must use the offsets included in the performance structures to navigate through the data. Every offset is a count of bytes relative to the structure containing it.
Note:
The reason the system uses offsets instead of pointers is that pointers are not valid across process boundaries. The addresses that the process that installs the counters would store would not be valid for the process that reads the counters.
The following example displays the index and name of each object, along with the indexes and names of its counters. The object and counter names are stored in the registry, by index. This example creates a function, GetNameStrings(), to load the indexes and names of each object and counter from the registry into an array, so that they can be easily accessed. GetNameStrings() uses the following standard registry functions to access the data: RegOpenKey(), RegCloseKey(), RegQueryInfoKey(), and RegQueryValueEx().
This example creates the following functions for navigating the performance data: FirstObject, FirstInstance, FirstCounter, NextCounter, NextInstance, and NextCounter. These functions navigate the performance data by using the offsets stored in the performance structures.
// #define _WIN32_WINNT 0x0502 // Windows Server 2003 family
// For Win Xp, change accordingly...
#define _WIN32_WINNT 0x0501
// #define _WIN32_WINNT 0x0500 // Windows 2000
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>
#include <malloc.h>
#include <string.h>
#define TOTALBYTES 20000
#define BYTEINCREMENT 2048
LPSTR lpNameStrings;
LPSTR *lpNamesArray;
// Functions used to navigate through the performance data.
PPERF_OBJECT_TYPE FirstObject(PPERF_DATA_BLOCK PerfData)
{
return((PPERF_OBJECT_TYPE)((PBYTE)PerfData + PerfData->HeaderLength));
}
PPERF_OBJECT_TYPE NextObject(PPERF_OBJECT_TYPE PerfObj)
{
return((PPERF_OBJECT_TYPE)((PBYTE)PerfObj + PerfObj->TotalByteLength));
}
PPERF_INSTANCE_DEFINITION FirstInstance(PPERF_OBJECT_TYPE PerfObj)
{
return((PPERF_INSTANCE_DEFINITION)((PBYTE)PerfObj + PerfObj->DefinitionLength));
}
PPERF_INSTANCE_DEFINITION NextInstance(PPERF_INSTANCE_DEFINITION PerfInst)
{
PPERF_COUNTER_BLOCK PerfCntrBlk;
PerfCntrBlk = (PPERF_COUNTER_BLOCK)((PBYTE)PerfInst + PerfInst->ByteLength);
return((PPERF_INSTANCE_DEFINITION)((PBYTE)PerfCntrBlk + PerfCntrBlk->ByteLength));
}
PPERF_COUNTER_DEFINITION FirstCounter(PPERF_OBJECT_TYPE PerfObj)
{
return((PPERF_COUNTER_DEFINITION) ((PBYTE)PerfObj + PerfObj->HeaderLength));
}
PPERF_COUNTER_DEFINITION NextCounter(PPERF_COUNTER_DEFINITION PerfCntr)
{
return((PPERF_COUNTER_DEFINITION)((PBYTE)PerfCntr + PerfCntr->ByteLength));
}
// Load the counter and object names from the registry to the
// global variable lpNamesArray.
BOOL GetNameStrings()
{
HKEY hKeyPerflib; // handle to registry key
HKEY hKeyPerflib009; // handle to registry key
DWORD dwMaxValueLen; // maximum size of key values
DWORD dwBuffer; // bytes to allocate for buffers
DWORD dwBufferSize; // size of dwBuffer
LPSTR lpCurrentString; // pointer for enumerating data strings
DWORD dwCounter; // current counter index
// Get the number of Counter items.
if(RegOpenKeyEx(
HKEY_LOCAL_MACHINE,
"SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\Perflib",
0,
KEY_READ,
&hKeyPerflib) != ERROR_SUCCESS)
return FALSE;
else
printf("RegOpenKeyEx() is OK.\n");
dwBufferSize = sizeof(dwBuffer);
if(RegQueryValueEx(
hKeyPerflib,
"Last Counter",
NULL,
NULL,
(LPBYTE) &dwBuffer,
&dwBufferSize) != ERROR_SUCCESS)
return FALSE;
else
printf("RegQueryValueEx() is OK.\n");
RegCloseKey(hKeyPerflib);
// Allocate memory for the names array.
lpNamesArray = (LPTSTR *)malloc((dwBuffer+1) * sizeof(LPSTR));
if(lpNamesArray == NULL)
return FALSE;
// Open the key containing the counter and object names.
if(RegOpenKeyEx(
HKEY_LOCAL_MACHINE,
"SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\Perflib\\009",
0,
KEY_READ,
&hKeyPerflib009) != ERROR_SUCCESS)
return FALSE;
else
printf("RegOpenKeyEx() is OK.\n");
// Get the size of the largest value in the key (Counter or Help).
if(RegQueryInfoKey(
hKeyPerflib009,
NULL,
NULL,
NULL,
NULL,
NULL,
NULL,
NULL,
NULL,
&dwMaxValueLen,
NULL,
NULL) != ERROR_SUCCESS
)
return FALSE;
else
printf("RegQueryInfoKey() is OK.\n");
// Allocate memory for the counter and object names.
dwBuffer = dwMaxValueLen + 1;
lpNameStrings = (LPTSTR)malloc(dwBuffer * sizeof(CHAR));
if(lpNameStrings == NULL)
{
free(lpNamesArray);
return FALSE;
}
else
printf("Memory allocated for lpNameStrings.\n");
// Read the Counter value.
if(RegQueryValueEx(
hKeyPerflib009,
"Counter",
NULL,
NULL,
(LPBYTE)lpNameStrings,
&dwBuffer) != ERROR_SUCCESS)
return FALSE;
else
printf("RegQueryValueEx() is OK.\n");
printf("Please wait...\n");
// Load names into an array, by index.
for(lpCurrentString = lpNameStrings; *lpCurrentString;
lpCurrentString += (lstrlen(lpCurrentString)+1))
{
dwCounter = atol(lpCurrentString);
lpCurrentString += (lstrlen(lpCurrentString)+1);
lpNamesArray[dwCounter] = (LPSTR)lpCurrentString;
}
return TRUE;
}
// Display the indexes and/or names for all performance
// objects, instances, and counters.*
int;
LONG k;
// Get the name strings through the registry.
if(!GetNameStrings())
return FALSE;
// Allocate the buffer for the performance data.
PerfData = (PPERF_DATA_BLOCK) malloc(BufferSize);
if(PerfData == NULL)
return FALSE; = FirstObject(PerfData);
// Process all objects.
for(i=0; i < PerfData->NumObjectTypes; i++)
{
// Display the object by index and name.
printf("\nObject %ld: %s\n", PerfObj->ObjectNameTitleIndex,
lpNamesArray[PerfObj->ObjectNameTitleIndex]);
// Get the first counter.
PerfCntr = FirstCounter(PerfObj);
if(PerfObj->NumInstances > 0)
{
// Get the first instance.
PerfInst = FirstInstance(PerfObj);
// Retrieve all instances.
for(k=0; k < PerfObj->NumInstances; k++)
{
// Display the instance by name.
printf("\n\tInstance %S: \n", (char *)((PBYTE)PerfInst + PerfInst->NameOffset));
CurCntr = PerfCntr;
// Retrieve all counters.
for(j=0; j < PerfObj->NumCounters; j++)
{
// Display the counter by index and name.
printf("\t\tCounter %ld: %s\n", CurCntr->CounterNameTitleIndex, lpNamesArray[CurCntr->CounterNameTitleIndex]);
// Get the next counter.
CurCntr = NextCounter(CurCntr);
}
// Get the next instance.
PerfInst = NextInstance(PerfInst);
}
}
else
{
// Get the counter block.
PtrToCntr = (PPERF_COUNTER_BLOCK)((PBYTE)PerfObj + PerfObj->DefinitionLength);
// Retrieve all counters.
for(j=0; j < PerfObj->NumCounters; j++)
{
// Display the counter by index and name.
printf("\tCounter %ld: %s\n", PerfCntr->CounterNameTitleIndex,
lpNamesArray[PerfCntr->CounterNameTitleIndex]);
// Get the next counter.
PerfCntr = NextCounter(PerfCntr);
}
}
// Get the next object type.
PerfObj = NextObject(PerfObj);
}
// Release all the memory back to system...
free(lpNamesArray);
free(lpNameStrings);
free(PerfData);
return TRUE;
}
RegOpenKeyEx() is OK.
RegQueryValueEx() is OK.
RegOpenKeyEx() is OK.
RegQueryInfoKey() is OK.
Memory allocated for lpNameStrings.
RegQueryValueEx() is OK.
Please wait...
Object 2908: .NET CLR Data
Counter 2910: SqlClient: Current # pooled and nonpooled connections
Counter 2912: SqlClient: Current # pooled connections
Counter 2914: SqlClient: Current # connection pools
Counter 2916: SqlClient: Peak # pooled connections
Counter 2918: SqlClient: Total # failed connects
Counter 2920: SqlClient: Total # failed commands
Object 2922: .NET CLR Networking
Counter 2924: Connections Established
Counter 2926: Bytes Received
Counter 2928: Bytes Sent
Counter 2930: Datagrams Received
Counter 2932: Datagrams Sent
Object 2934: .NET CLR Memory
Instance _Global_:
Counter 2936: # Gen 0 Collections
Counter 2938: # Gen 1 Collections
Counter 2940: # Gen 2 Collections
Counter 2942: Promoted Memory from Gen 0
...
...
...
[trimmed]
Windows Registry and Automatic Program Running
In general, there are seven Run keys (the term Run is used here generically) in the registry that cause programs to be run automatically, as described below.
Windows Registry and Automatic Program Startup Sequence
These registry key settings are normally created during program installation (Setup). Many programs that you install are automatically run when you start your computer and load Windows. Unfortunately, illegitimate programs such as spyware, hijackers, trojans, bots, worms, and viruses load in this manner as well. Many malware scanners therefore search the registry keys and folders from which programs are started automatically during the boot process and while Windows loads. Windows' Msconfig.exe can be used to list programs that are automatically started from some of these locations, but it only covers a limited set of startup keys. The following sections list the various registry keys, values, and files under certain folders that can be used to start a program when Windows boots. The keys have been arranged in the order they load whenever possible. Keep in mind that some of the keys load at the same time (asynchronously), so the order may change on each boot. These keys generally apply to Windows 95, 98, ME, NT, XP, and 2000 except where noted.
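As a concrete illustration of how such a scanner (or any diagnostic tool) can inspect one of these locations, here is a hedged C sketch that enumerates the value names and data under the current user's Run key. It is not taken from this tutorial's own listings; it only uses the documented RegOpenKeyEx()/RegEnumValue()/RegCloseKey() calls, and the fixed buffer sizes are illustrative.

#include <windows.h>
#include <stdio.h>

int main(void)
{
HKEY hKey;
DWORD i = 0;
CHAR name[256], data[1024];
DWORD nameLen, dataLen, type;

if(RegOpenKeyEx(HKEY_CURRENT_USER,
"Software\\Microsoft\\Windows\\CurrentVersion\\Run",
0, KEY_READ, &hKey) != ERROR_SUCCESS)
return 1;

for(;;)
{
nameLen = sizeof(name);
dataLen = sizeof(data);
// ERROR_NO_MORE_ITEMS (or any other failure) ends the loop.
if(RegEnumValue(hKey, i, name, &nameLen, NULL,
&type, (LPBYTE)data, &dataLen) != ERROR_SUCCESS)
break;
if(type == REG_SZ || type == REG_EXPAND_SZ)
printf("%s = %s\n", name, data);
i++;
}
RegCloseKey(hKey);
return 0;
}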
<Turning on the computer>
The keys start in the following order as Windows loads:
RunServicesOnce key:
HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\RunServicesOnce
HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\RunServicesOnce
This key is designed to start services when a computer boots up. These entries can also continue running even after you log on, but must be completed before the HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\RunOnce registry can start loading its programs.
RunServices key:
HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\RunServices
HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\RunServices
This key is designed to start services as well. These entries can also continue running even after you log on, but must be completed before the HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\RunOnce registry can start loading its programs.
<Logon dialog box is displayed on screen>
After a user logs in the rest of the keys continue loading.
The RunServicesOnce and
RunServices keys are
loaded before the user logs into Windows 95, Windows 98, and Windows Me.
Because these two keys run asynchronously with the Logon dialog box, they can
continue to run after the user has logged on. However, since
HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\RunOnce
must load synchronously, its entries will not begin loading until after the
RunServicesOnce and
RunServices keys have
finished loading.
RunOnce/RunOnceEx key:
HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\RunOnce
HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\RunOnceEx
These keys are designed to run a program once, after which the entry is deleted. All of their entries must finish loading before the entries in HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Run,
HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Run,
HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\RunOnce, and
Startup Folders,
can be loaded. The RunOnce keys are ignored under Windows 2000 and Windows XP in Safe Mode. The RunOnce keys are not supported by Windows NT 3.51.
Run key:
HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Run
HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Run
These are the most common startup locations for programs to install auto start from. By default these keys are not executed in Safe mode. If you prefix the value of these keys with an asterisk (*), it is forced to run in Safe Mode.
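For completeness, the other direction - registering a program so that it starts automatically - is simply a matter of writing a REG_SZ value under one of these Run keys. A hedged C sketch follows; the value name "MyProgram" and the program path are purely illustrative, and only documented Win32 calls are used.

#include <windows.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
HKEY hKey;
const char szPath[] = "C:\\Program Files\\MyApp\\myapp.exe";

if(RegOpenKeyEx(HKEY_CURRENT_USER,
"Software\\Microsoft\\Windows\\CurrentVersion\\Run",
0, KEY_SET_VALUE, &hKey) != ERROR_SUCCESS)
return 1;

// REG_SZ data must include the terminating null character in its size.
if(RegSetValueEx(hKey, "MyProgram", 0, REG_SZ,
(const BYTE*)szPath, (DWORD)(strlen(szPath) + 1)) == ERROR_SUCCESS)
printf("Autostart entry written.\n");

RegCloseKey(hKey);
return 0;
}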
All Users Startup Folder:
For Windows XP, 2000, and NT, this folder is used for programs that should be auto started for all users who will login to this computer. It is generally can be found at:
User Profile Startup Folder:
This folder will be executed for the particular user who logs in. This folder is usually can be found in:
RunOnce Current User Key:
HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\RunOnce
This key is used to start a program once for the user who is currently logging on; the entry is then deleted.
Explorer Run policy keys:
HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer\Run
HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer\Run
These keys are generally used to load programs as part of a policy set in place on the computer or user if any.
UserInit value:
HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\Winlogon\Userinit
The default data for this value is %SystemRoot%\system32\userinit.exe; additional programs can be appended, separated by commas, for example %SystemRoot%\system32\userinit.exe,%SystemRoot%\system32\otherbadprogram.exe.
This will make both programs launch when you log in and is a common place for trojans, hijackers, and spyware to launch from.
Load value:
HKEY_CURRENT_USER\Software\Microsoft\Windows NT\CurrentVersion\Windows\load
This key is not commonly used anymore, but can be used to auto start programs.
Notify key:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\Notify
This key is used to add a program that will run when a particular event occurs. Events include logon, logoff, startup, shutdown, startscreensaver, and stopscreensaver. When Winlogon.exe generates one of these events, Windows looks in this Notify key for the handler to run.
AppInit_DLLs value:
HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\Windows, which holds the AppInit_DLLs value; DLLs listed there are loaded into processes that load User32.dll.
ShellServiceObjectDelayLoad key:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\ShellServiceObjectDelayLoad
Note: CLSID Key
A CLSID is a globally unique identifier that identifies a COM class object. If your server or container allows linking to its embedded objects, then you need to register a CLSID for each supported class of objects. The Registry entry is:
HKEY_LOCAL_MACHINE\SOFTWARE\Classes\CLSID =
SharedTaskScheduler key:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\SharedTaskScheduler
This section corresponds to files being loaded through the SharedTaskScheduler registry value for XP, NT, 2000 machines. The entries in this registry run automatically when you start windows.
Other Files:
Other files that programs can use to autostart during the bootup, generally used by older Windows OSes such as Win 95 and Win 98 that may still functional in some of the newer Windows OSes are listed in the following (the default %SystemRoot% normally C:\Windows or C:\WINNT):
1. C:\autoexec.bat
2. C:\config.sys
3. %SystemRoot%\wininit.ini - Usually used by setup programs to have a file run once and then get deleted.
4. %SystemRoot%\winstart.bat
5. %SystemRoot%\win.ini - [windows] "load=..."
6. %SystemRoot%\win.ini - [windows] "run=..."
7. %SystemRoot%\system.ini - [boot] "shell=..."
8. %SystemRoot%\system.ini - [boot] "scrnsave.exe"
9. %SystemRoot%\dosstart.bat - Used in Win95 or 98 when you select the "Restart in MS-DOS mode" in the shutdown menu.
10. %SystemRoot%\system\autoexec.nt
11. %SystemRoot%\system\config.nt
Well, a lot of location that can be used to start our program automatically and unfortunately these locations also shared by a lot of malware, viruses and worms :o).
Windows XP
The following information is specific to Windows XP:
▪ Beginning with Windows XP, the values in the RunOnce keys are run only if the user has permission to delete entries from the respective key.
▪ The programs in the RunOnce key are run sequentially. Explorer waits until each one has exited before continuing with normal startup.
▪ By default, Run keys are ignored when the computer starts in Safe mode. Under the RunOnce keys, you can prefix a value name with an asterisk (*) to force the associated program to run even in Safe mode.
▪ You can prefix a RunOnce value name with an exclamation point (!) to defer deletion of the value until after the command runs.
▪.
Windows XP has two separate Run policies:
The items that you added to the Items to run at logon list through the Group Policy, start automatically the next time that you log on to Windows on your computer. A list of these items is located in the following registry key:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\policies\Explorer\Run
The legacy programs that are configured to start when you log on to your computer are listed in the following registry key:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run
Many third-party programs, such as Adobe, can be included in this category. You can either enable or disable the legacy run list. You cannot modify it directly from within the Group Policy snap-in.
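As an illustration only (this program is not part of the original startup list), the following C sketch uses the Win32 registry API to enumerate whatever is currently set under the HKLM Run key; the buffer sizes are arbitrary.
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HKEY hKey;
    char name[256];
    BYTE data[1024];
    DWORD i = 0, nameLen, dataLen, type;

    //Open the legacy Run key for reading
    if(RegOpenKeyExA(HKEY_LOCAL_MACHINE,
        "SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Run",
        0, KEY_READ, &hKey) != ERROR_SUCCESS)
    {
        printf("Unable to open the Run key.\n");
        return 1;
    }
    //Enumerate every value name/data pair under the key
    for(;;)
    {
        nameLen = sizeof(name);
        dataLen = sizeof(data);
        if(RegEnumValueA(hKey, i, name, &nameLen, NULL, &type, data, &dataLen) != ERROR_SUCCESS)
            break;
        if(type == REG_SZ || type == REG_EXPAND_SZ)
            printf("%s = %s\n", name, (char *)data);
        ++i;
    }
    RegCloseKey(hKey);
    return 0;
}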
The RunOnceEx key has the following features:
▪ Status: A dialog box is displayed while the items contained in the registry key are being processed. The entries to be processed are grouped into sections and the dialog box highlights the current section being processed. You can disable the status dialog box feature.
▪.
▪ run the RunOnceEx registry key.
▪ Deterministic: The RunOnceEx registry key sorts the entries and sections alphabetically to force a deterministic order.
--------------------------- Windows Registry: Examples and Exploits, Part II-----------------------
------
Further reading and digging:
|
http://www.tenouk.com/ModuleP1.html
|
crawl-001
|
en
|
refinedweb
|
|< C Storage Class & Memory Functions | Main | C Run-Time 2 >| Site Index | Download |
MODULE A
IMPLEMENTATION SPECIFIC
MICROSOFT C Run-Time 1
My Training Period: hours
Note: All programs are in debug mode, run on Windows 2000 and XP Pro.
- The Compiler Options, Preprocessor Directives and other configuration settings can be accessed from Project menu → your_project_name Properties... sub menu.
- It also can be accessed by selecting your project directory (in the Solution Explorer pane) → Right click → Select the Properties menu as shown below.
Figure 1
- The following Figure shows a project's Property Pages.
Figure 2.
- If you link your program from the command line without a compiler option that specifies a C run-time library, the linker will use LIBC.LIB by default.
- To build a debug version of your application, the _DEBUG flag must be defined and the application must be linked with a debug version of one of these libraries.
- However in this Module and that follows we will compile and link by using the Visual C++ .Net IDEs’ menus instead of command line :o).
- The run-time libraries also include .lib files that contain the iostream library and the Standard C++ Library. You should never have to explicitly link to one of these .lib files; the header files in each library link in the correct .lib file.
- The programs that use this old iostream library will need the .h extension for the header files.
- Before Visual C++ 4.2, the C run-time libraries contained the iostream library functions. In Visual C++ 4.2 and later, the old iostream library functions have been removed from LIBC.LIB, LIBCD.LIB, LIBCMT.LIB, LIBCMTD.LIB, MSVCRT.LIB, and MSVCRTD.LIB as shown in the following Table.
- The new iostream functions, as well as many other new functions, that exist in the Standard C++ Library are shown in the following Table.
- The programs that use this new iostream library will need the header files without the .h extension such as <string>.
- The Standard C++ Library and the old iostream library are incompatible, that is they cannot be mixed and only one of them can be linked with your project.
- The old iostream library was created before the standard had matured.
- Which library gets linked in depends on the header files you include in your code, for example:
▪ If you include a Standard C++ Library header in your code, a Standard C++ Library will be linked in automatically by Visual C++ at compile time. For example:
#include <ios>
▪ If you include an old iostream library header, an old iostream library will be linked in automatically by Visual C++ at compile time. For example:
#include <ios.h>
- The required preprocessor directives are automatically defined. Also read Module 23 for the big picture.
- As a conclusion, for typical C and C++ programs, by providing the proper header files just let the compiler determine for you which libraries are appropriate to be linked in.
- The following sections will dive more detail some of the functions available in the C run-time library.
- These functions are used to create routines that can automate many tasks in Windows that are not available in standard C/C++.
- We will start with a category, then the functions available in that category, and finally program examples that use some of the functions.
- The functions discussed here mostly deal with directories and files.
- For complete information please refer to the Microsoft Visual C++ documentation (online MSDN: Microsoft Visual C++). As a reminder, the following are the things that must be fully understood when you want to use a function, and here, _chsize() is used as an example.
1. What is the use of the function? This will match with "what are you going to do or create?" Keep in mind that to accomplish some of the tasks you might need more (several) than one function. For example:
_chsize() used to change the file size.
2. What is the function prototype, so we know how to write (call) a proper syntax of the function. For example:
int _chsize(int handle, long size);
3. From the function prototype, how many parameters there are, their order and their types, so we can create the needed variables. For example: _chsize() takes an int file handle followed by a long size.
4. What header files need to be included? For example the _chsize needs:
<io.h>
5. What is the return type and value? For example:
_chsize returns 0 if the file size is successfully changed. A return value of –1 indicates an error: errno is set to EACCES if the specified file is locked against access, to EBADF if the specified file is read-only or the handle is invalid, or to ENOSPC if no space is left on the device.
- Hence, you are ready to use the function in any of your programs. If you are still unclear about functions, please read the C & C++ Functions tutorial. Not all the needed information is provided here; for the complete version, please refer to the Microsoft Visual C++/MSDN/SDK Platform documentations or HERE.
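- Just as an illustrative sketch (not one of the module's official examples), the following program applies the _chsize() information above: it opens (or creates) a file named test.dat, grows it to 100 bytes and reports the result. The file name and size are arbitrary.
#include <io.h>
#include <fcntl.h>
#include <sys/stat.h>
#include <stdio.h>

int main(void)
{
    //Open or create the file for read/write
    int handle = _open("test.dat", _O_RDWR | _O_CREAT, _S_IREAD | _S_IWRITE);
    if(handle == -1)
    {
        printf("Unable to open test.dat\n");
        return 1;
    }
    //Grow the file to 100 bytes, padded with null characters
    if(_chsize(handle, 100) == 0)
        printf("Size successfully changed to 100 bytes\n");
    else
        printf("Problem in changing the size, errno tells why\n");
    _close(handle);
    return 0;
}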
- For C++, together with the classes, it is used when you develop programs using Microsoft Foundation Class (MFC) and Automatic Template Library (ATL).
- The notation convention used for identifiers in MSDN documentation is Hungarian Notation and is discussed in C/C++ Notations.
--------------------The Story and Program Examples---------------------
- Functions available in this category are used to access, modify, and obtain information about the directory structure.
- Notice that most of the function names are similar to standard C wherever available, except prefixed with an underscore ( _ ) :o). The functions available in this category are listed in the following Table.
Directory Management Functions
- The following Table lists functions used for directory control and management.
- The following is an example of the needed information in order to use the _getdrives() function.
- The following program example uses the _getdrives() function to list the available logical drives in the current machine.
#include <windows.h>
#include <direct.h>
#include <stdio.h>
#include <tchar.h>
//Buffer, be careful with terminated NULL
//Must match with ++mydrives[1]...that is one space
//Example if no one space: "A:"--> ++mydrives[0];
TCHAR mydrives[] = " A: ";
//Or char mydrives[] = {" A: "};
//Or char mydrives[] = " A: ";
int main()
{
//Get the drives bit masks...1 is available, 0 is not available
//A = least significant bit...
ULONG DriveMask = _getdrives();
//If something wrong
if(DriveMask == 0)
printf("_getdrives() failed with failure code: %d\n", GetLastError());
else
{
printf("This machine has the following logical drives:\n");
while (DriveMask)
{ //List all the drives...
if(DriveMask & 1)
printf(mydrives);
//Go to the next drive strings with one space
++mydrives[1];
//Shift the bit masks binary
//to the right and repeat
DriveMask >>= 1;
}
printf("\n");
}
return 0;
}
The output:
This machine has the following logical drives:
A: C: D: E: F: G: H: I: J: K: L:
Press any key to continue
- The following is an example of the needed information in order to use the _getdiskfree() function.
- The following program example uses the _getdiskfree() function to list logical drives information in the current machine.
#include <windows.h>
#include <direct.h>
#include <stdio.h>
#include <tchar.h>
TCHAR g_szText[] = _T("Drive Total_clus Available_clus Sec/Cluster Bytes/Sec\n");
TCHAR g_szText1[] = _T("----- ---------- -------------- ----------- ---------\n");
TCHAR g_szInfo[] = _T("-> \n");
//For data display format...
//Right justified, thousand comma separated and other format
//for displayed data
void utoiRightJustified(TCHAR* szLeft, TCHAR* szRight, unsigned uValue)
{
TCHAR* szCur = szRight;
int nComma = 0;
if(uValue)
{
while(uValue && (szCur >= szLeft))
{
if(nComma == 3)
{
*szCur = ',';
nComma = 0;
}
else
{
*szCur = (uValue % 10) | 0x30;
uValue /= 10;
++nComma;
}
--szCur;
}
}
else
{
*szCur = '0';
--szCur;
}
if(uValue)
{
szCur = szLeft;
while(szCur <= szRight)
{//If not enough field to display the data...
*szCur = '*';
++szCur;
}
}
}
int main()
{
TCHAR szMsg[4200];
struct _diskfree_t df = {0};
//Search drives and assigns the bit masks to
//uDriveMask variable...
ULONG uDriveMask = _getdrives();
unsigned uErr, uLen, uDrive;
printf("clus - cluster, sec - sector\n");
printf(g_szText);
printf(g_szText1);
for(uDrive = 1; uDrive <= 26; ++uDrive)
{
//If the drive is available...
if(uDriveMask & 1)
{ //Call _getdiskfree()...
uErr = _getdiskfree(uDrive, &df);
//Provide some storage
memcpy(szMsg, g_szInfo, sizeof(g_szInfo));
szMsg[3] = uDrive + 'A' - 1;
//If _getdiskfree() is no error, display the data
if(uErr == 0)
{
utoiRightJustified(szMsg+4, szMsg+15, df.total_clusters);
utoiRightJustified(szMsg+18, szMsg+29, df.avail_clusters);
utoiRightJustified(szMsg+27, szMsg+37, df.sectors_per_cluster);
utoiRightJustified(szMsg+40, szMsg+50, df.bytes_per_sector);
}
else
{//Print system message and left other fields empty
uLen = FormatMessage(FORMAT_MESSAGE_FROM_SYSTEM, NULL, uErr, 0, szMsg+8, 4100, NULL);
szMsg[uLen+6] = ' ';
szMsg[uLen+7] = ' ';
szMsg[uLen+8] = ' ';
}
printf(szMsg);
}
//shift right the found drive bit masks and
//repeat the process
uDriveMask >>= 1;
}
return 0;
}
The output:
clus - cluster, sec - sector
Drive Total_clus Available_clus Sec/Cluster Bytes/Sec
----- ---------- -------------- ----------- ---------
-> A 2,847 834 1 512
-> C 2,560,351 62,315 8 512
-> D 2,560,351 1,615,299 8 512
-> E 2,560,351 2,310,975 8 512
-> F 2,560,351 2,054,239 8 512
-> G 2,353,514 2,039,396 8 512
-> H 2,560,351 2,394,177 8 512
-> I 2,381,628 1,618,063 8 512
-> J 321,935 0 1 2,048
-> L 63,419 20,706 16 512
Press any key to continue
- Note that J: is CD-RW, L: is thumb drive and A: is a floppy. For floppy and CD-ROM, you have to insert the media.
- The following is an example of the needed information in order to use the _getdrive() function.
- The following Table lists the needed information in order to use the _chdir(), _wchdir() functions.
- This function can change the current working directory on any drive. If a new drive letter is specified in the path, the default drive letter will be changed as well.
- For example, if A is the default drive letter and \BIN is the current working directory, the following call changes the current working directory for drive C and establishes C as the new default drive:
_chdir("c:\\te
|
http://www.tenouk.com/ModuleA.html
|
crawl-001
|
en
|
refinedweb
|
A continuation from previous Module. The source code for this module is: C/C++ pointers program source codes. The lab worksheets for your practice are: C/C++ pointers part 1 and C/C++ pointers part 2. Also the exercises in the Indirection Operator lab worksheet 1, lab worksheet 2 and lab worksheet 3.
Element of an array are stored in sequential memory locations with the first element in the lowest address.
Subsequent array elements, those with an index greater than 0 are stored at higher addresses.
As mentioned before, an array element of type int occupies 2 bytes of memory and a type float occupies 4 bytes. Hence the size depends on the type and the platform (e.g. 32 or 64-bit systems).
So, for float type, each element is located 4 bytes higher than the preceding element, and the address of each array element is 4 higher than the address of the preceding element.
For example, relationship between array storage and addresses for a 6-elements int array and a 3-elements float array is illustrated below.
Figure 8.9
The x variable without the array brackets is the address of the first element of the array, x[0].
The first element is at address 1000; the second element is at 1002 and so on.
In conclusion, to access successive elements of an array of a particular data type, a pointer must be increased by sizeof(data_type). The sizeof operator returns the size in bytes of a C/C++ data type. Let's take a look at the following example:
// demonstrates the relationship between addresses
// and elements of arrays of different data type
#include <stdio.h>
void main()
{
// declare three arrays and a counter variable
int i[10], x;
float f[10];
double d[10];
// print the table heading
printf("\nArray's el. add of i[x] add of f[x] add of d[x]");
printf("\n|================================");
printf("======================|");
// print the addresses of each array element
for(x=0; x<10; x++)
printf("\nElement %d:\t%p\t%p\t%p",x,&i[x],&f[x],&d[x]);
printf("\n|================================");
printf("======================|\n");
printf("\nLegends:");
printf("\nel.- element, add - address\n");
printf("\ndifferent pc, shows different addresses\n");
}
Notice the difference between the element addresses.
12FEB4 – 12FEB0 = 4 bytes for int
12FE78 – 12FE74 = 4 bytes for float
12FE24 – 12FE1C = 8 bytes for double
The size of the data type depends on the specification of your compiler and whether your target is a 16, 32 or 64-bit system, so the output of the program may be different on different PCs. The addresses may also differ.
Try another program example.
// demonstrates the use of pointer arithmetic to access
// array elements with pointer notation
#include <stdio.h>
#define MAX 10
void main()
{
// declare and initialize an integer array
int array1[MAX] = {0,1,2,3,4,5,6,7,8,9};
// declare a pointer to int and an int variable
int *ptr1, count;
// declare and initialize a float array
float array2[MAX] = {0.0,0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9};
// declare a pointer to float
float *ptr2;
// initialize the pointers
// just an array name is the pointer to the
// 1st array element, both left value and right value
// of the expression are pointers types...
ptr1 = array1;
ptr2 = array2;
// print the array elements
printf("\narray1 values array2 values");
printf("\n-------------------------");
// iterate or loop the arrays and display the content...
for(count = 0; count < MAX; count++)
printf("\n%d\t\t%f", *ptr1++, *ptr2++);
printf("\n-------------------------\n");
}
Let's make it clear: if an array named list[ ] is a declared array, the expression *list is the array's first element, *(list + 1) is the array's second element, and so on.
Generally, the relationship is as follows:
*(list) == list[0] // first element
*(list + 1) == list[1] // second element
*(list + 2) == list[2] // third element
...
...
*(list + n) == list[n] // the nth element
So, you can see the equivalence of array subscript notation and array pointer notation.
Pointers may be arrayed like any other data type. The declaration for an int pointer array of size 20 is:
int *arrayPtr[20];
To assign the address of an integer variable called var to the first element of the array, we could write something like this:
// assign the address of variable var to the first arrayPtr element
arrayPtr[0] = &var;
Graphically can be depicted as follows:
Figure 8.10
To find the value stored in var, we could write something like this:
*arrayPtr[0]
To pass an array of pointers to a function, we simply call the function with the array’s name without any index/subscript, because this is an automatically a pointer to the first element of the array, as explained before.
For example, to pass the array named arrayPtr to viewArray function, we write the following statement:
viewArray(arrayPtr);
The following program example demonstrates the passing of a pointer array to a function. It first declares and initializes the array variable var (not a pointer array).
Then it assigns the address of each element (var[i]) to the corresponding pointer element (arrayPtr[i]).
Next, the array arrayPtr is passed to the array’s parameter q in the function viewArray(). The function displays the elements pointed to by q (that is the values of the elements in array var) and then passes control back to main().
// a program that passes a pointer array to a function
#include <iostream>
using namespace std;
// a function prototype for viewArray
void viewArray(int *[ ]);
void main()
{
// declare and initialize the array variables...
int i,*arrayPtr[7], var[7]={3,4,4,2,1,3,1};
// loop through the array...
for(i=0; i<7; i++)
// arrayPtr[i] is assigned with the address of var[i]
arrayPtr[i] = &var[i];
// a call to function viewArray,
// pass along the pointer to the
//1st array element
viewArray(arrayPtr);
cout<<endl;
}
// an arrayPtr is now passed to parameter q,
// q[i] now points to var[i]
void viewArray(int *q[ ])
{
int j;
// displays the element var[i] pointed to by q[j]
// followed by a space. No value is returned
// and control reverts to main()
for(j = 0; j < 7; j++)
cout<<*q[j]<<" ";
}
-----------------------------------------------------------------------------------------------
Graphically, the construct of a pointer to pointer can be depicted as shown below. pointer_one is the first pointer, pointing to the second pointer, pointer_two and finally pointer_two is pointing to a normal variable num that hold integer 10.
Figure 8.11
Another explanation, from the following figure, a pointer to a variable (first figure) is a single indirection but if a pointer points to another pointer (the second figure), then we have a double or multiple indirections.
Figure 8.12
For the second figure, the first pointer is not a pointer to an ordinary variable, but rather a pointer to another pointer.
In other words, the first pointer points to the second pointer, which in turn points to the variable that contains the data value.
In order to indirectly access the target value pointed to by a pointer to a pointer, the asterisk operator must be applied twice. For example, the following declaration:
int **SecondPtr;
The code tells the compiler that SecondPtr is a pointer to a pointer of type integer. Pointer to pointer is rarely used but you will find it regularly in programs that accept argument(s) from command line.
Consider the following declarations:
char chs; /* a normal character variable */
char *ptchs; /* a pointer to a character */
char **ptptchs; /* a pointer to a pointer to a character */
If the variables are related as shown below:
Figure 8.13
We can do some assignment like this:
chs = 'A';
ptchs = &chs;
ptptchs = &ptchs;
Recall that char * refers to a NULL terminated string. So one common way is to declare a pointer to a pointer to a string something like this:
Figure 8.14
Taking this one stage further, we can have several strings being pointed to by an array of character pointers, as shown below.
Figure 8.15
Then, we can refer to the individual string by using ptptchs[0], ptptchs[1],…. and generally, this is identical to declaring:
char *ptptchs[ ] /* an array of pointer */
Or from Figure 8.15:
char **ptptchs
Thus, programs that accept argument(s) through command line, the main() parameter list is declared as follows:
int main(int argc, char **argv)
Or something like this:
int main(int argc, char *argv[ ])
Where the argc (argument counter) and argv (argument vector) are equivalent to ptchs and ptptchs respectively.
For example, program that accept command line argument(s) such as echo:
C:\>echo This is command line argument
This is command line argument
Here we have:
Figure 8.16
/* a program to print arguments from command line */
/* run this program at the command prompt */
#include <stdio.h>
/* or int main(int argc, char *argv[ ]) */
int main(int argc, char **argv)
{
int i;
printf("argc = %d\n\n", argc);
for (i=0; i<argc; ++i)
printf("argv[%d]: %s\n", i, argv[i]);
return 0;
}
Another silly program example :o):
// pointer to pointer...
#include <stdio.h>
int main(void)
{
int **theptr;
int *anotherptr;
int data = 200;
anotherptr = &data;
// assign the second pointer address to the first pointer...
theptr = &anotherptr;
printf("The actual data, **theptr = %d\n", **theptr);
printf("\nThe actual data, *anotherptr = %d\n", *anotherptr);
printf("\nThe first pointer pointing to an address, theptr = %p\n", theptr);
printf("\nThis should be the second pointer address, &anotherptr = %p\n", &anotherptr);
printf("\nThe second pointer pointing to address(= hold data),\nanotherptr = %p\n", anotherptr);
printf("\nThen, its own address, &anotherptr = %p\n", &anotherptr);
printf("\nThe address of the actual data, &data = %p\n", &data);
printf("\nNormal variable, the data = %d\n", data);
return 0;
}
Because of C functions have addresses we can use pointers to point to C functions. If we know the function’s address then we can point to it, which provides another way to invoke it.
Function pointers are pointer variables which point to functions. Function pointers can be declared, assigned values and then used to access the functions they point to. The declaration is as the following:
int (*funptr)();
Here, funptr is declared as a pointer to a function that returns int data type.
The interpretation is the de-referenced value of funptr, that is (*funptr) followed by () which indicates a function, which returns integer data type.
The parentheses are essential in the declarations because of the operators’ precedence. The declaration without the parentheses as the following:
int *funptr();
Will declare a function funptr that returns an integer pointer that is not our intention in this case. In C, the name of a function, used in an expression by itself, is a pointer to that function. For example, if a function, testfun() is declared as follows:
int testfun(int x);
The name of this function, testfun is a pointer to that function. Then, we can assign them to pointer variable funptr, something like this:
funptr = testfun;
The function can now be accessed or called, by dereferencing the function pointer:
/* calls testfun() with x as an argument then assign to the variable y */
y = (*funptr)(x);
Function pointers can be passed as parameters in function calls and can be returned as function values.
Use of function pointers as parameters makes for flexible functions and programs. It’s common to use typedefs with complex types such as function pointers. You can use this typedef name to hide the cumbersome syntax of function pointers. For example, after defining:
typedef int (*funptr)();
The identifier funptr is now a synonym for the type 'pointer to a function that takes no arguments and returns int'. This typedef would make declaring pointers such as testvar, as shown below, considerably easier:
funptr testvar;
Another example, you can use this type in a sizeof() expression or as a function parameter as shown below:
/* get the size of a function pointer */
unsigned ptrsize = sizeof(int (*)());
/* used as a function parameter */
void signal(int (*funptr)());
Let try a simple program example using function pointer.
/* invoking function using function pointer */
#include <stdio.h>
int somedisplay();
int main()
{
int (*func_ptr)();
/* assigning a function to function pointer
as normal variable assignment */
func_ptr = somedisplay;
/* checking the address of function */
printf("\nAddress of function somedisplay() is %p", func_ptr);
/* invokes the function somedisplay() */
(*func_ptr)() ;
return 0;
}
int somedisplay()
{
printf("\n--Displaying some texts--\n");
return 0;
}
Another example with an argument.
#include <stdio.h>
/* function prototypes */
void funct1(int);
void funct2(int);
/* making FuncType an alias for the type
'function with one int argument and no return value'.
This means the type of func_ptr is 'pointer to function
with one int argument and no return value'. */
typedef void FuncType(int);
int main(void)
{
FuncType *func_ptr;
/* put the address of funct1 into func_ptr */
func_ptr = funct1;
/* call the function pointed to by func_ptr with an argument of 100 */
(*func_ptr)(100);
/* put the address of funct2 into func_ptr */
func_ptr = funct2;
/* call the function pointed to by func_ptr with an argument of 200 */
(*func_ptr)(200);
return 0;
}
/* function definitions */
void funct1(int testarg)
{printf("funct1 got an argument of %d\n", testarg);}
void funct2(int testarg)
{printf("funct2 got an argument of %d\n", testarg);}
The following codes in the program example:
func_ptr = funct1;
(*func_ptr)(100);
Can also be written as:
func_ptr = &funct1;
(*func_ptr)(100);
Or
func_ptr = &funct1;
func_ptr(100);
Or
func_ptr = funct1;
func_ptr(100);
As we have discussed before, we can have an array of pointers to an int, float and string. Similarly we can have an array of pointers to a function. It is illustrated in the following program example.
/* an array of pointers to function */
#include <stdio.h>
/* functions' prototypes */
int fun1(int, double);
int fun2(int, double);
int fun3(int, double);
/* an array of a function pointers */
int (*p[3]) (int, double);
int main()
{
int i;
/* assigning address of functions to array pointers */
p[0] = fun1;
p[1] = fun2;
p[2] = fun3;
/* calling an array of function pointers with arguments */
for(i = 0; i <= 2; i++)
(*p[i]) (100, 1.234);
return 0;
}
/* functions' definition */
int fun1(int a, double b)
{
printf("a = %d b = %f", a, b);
return 0;
}
int fun2(int c, double d)
{
printf("\nc = %d d = %f", c, d);
return 0;
}
int fun3(int e, double f)
{
printf("\ne = %d f = %f\n", e, f);
return 0;
}
----------------------------------------------------------------------------
In the above program we declare an array of pointers to functions, int (*p[3])(int, double). Then, we store the addresses of the three functions fun1(), fun2(), fun3() in the array p. In the for loop we consecutively call each function using the addresses stored in the array.
For function and array, the only way an array can be passed to a function is by means of a pointer.
Before this, an argument is a value that the calling program passes to a function. It can be int, a float or any other simple data type, but it has to be a single numerical value.
The argument can, therefore, be a single array element, but it cannot be an entire array.
If an entire array needs to be passed to a function, then you must use a pointer.
As said before, a pointer to an array is a single numeric value (the address of the array’s first element).
Once the value of the pointer (memory address) is passed to the function, the function knows the address of the array and can access the array elements using pointer notation.
Then how does the function know the size of the array whose address it was passed?
Remember! The value passed to a function is a pointer to the first array element. It could be the first of 10 elements or the first of 10,000, or whatever the array size is.
The method used for letting a function know an array's size is to pass the array size to the function as a simple int argument.
Thus the function receives two arguments:
- A pointer to the first array element, and
- An integer specifying the number of elements in the array (the array size), as sketched in the example below.
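The following is a minimal sketch (not one of the module's numbered examples) of that two-argument pattern; the array contents are arbitrary.
/* passing an entire array to a function: a pointer plus the element count */
#include <stdio.h>

/* sums nElement ints starting at the address held in ptr */
int sumArray(int *ptr, int nElement)
{
    int i, total = 0;
    for(i = 0; i < nElement; i++)
        total = total + *(ptr + i);  /* same as ptr[i] */
    return total;
}

int main(void)
{
    int data[5] = {10, 20, 30, 40, 50};
    /* the array name by itself is a pointer to data[0] */
    printf("Sum = %d\n", sumArray(data, 5));
    return 0;
}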
The following program example illustrates the use of a pointer to a function. It uses the declaration float (*ptr)(float, float) to specify the number and types of arguments. The statement ptr = minimum; assigns the address of minimum() to ptr.
The statement small = (*ptr)(x1, x2); calls the function pointed to by (*ptr), that is the function minimum() which then returns the smaller of the two values.
// pointer to a function
#include <iostream>
using namespace std;
// function prototypes...
float minimum(float, float);
// (*ptr) is a pointer to function of type float
float (*ptr)(float, float);
void main()
{
float x1, x2, small;
// assigning address of minimum() function to ptr
ptr = minimum;
cout<<"\nEnter two numbers, separated by space: ";
cin>>x1>>x2;
// call the function pointed by ptr small has the return value
small = (*ptr)(x1, x2);
cout<<"\smaller number is "<<small<<endl;
}
float minimum(float y1, float y2)
{
if (y1 < y2)
return y1;
else
return y2;
}
Study the program's source code and the output.
|
http://www.tenouk.com/Module8a.html
|
crawl-001
|
en
|
refinedweb
|
#include <dense.h>
The class LinBox::Dense builds on this base.
Currently, only dense vectors are supported when doing matrix-vector applies.
Reimplemented in DenseMatrix, DenseMatrix< Field >, and DenseMatrix< Domain >.
The raw iterator is a method for accessing all entries in the matrix in some unspecified order. This can be used, e.g. to reduce all matrix entries modulo a prime before passing the matrix into an algorithm.
[inline]
Constructor.
Constructor from a matrix stream
Get a pointer on the storage of the elements
Get the number of rows in the matrix
Get the number of columns in the matrix
Element()
Resize the matrix to the given dimensions The state of the matrix's entries after a call to this method is undefined
Read the matrix from an input stream
Write the matrix to an output stream
Set the entry at the (i, j) position to a_ij.
Get a writeable reference to the entry in the (i, j) position.
Get a read-only reference to the entry in the (i, j) position.
Copy the (i, j) entry into x, and return a reference to x. This form is more in the Linbox style and is provided for interface compatibility with other parts of the library
Retrieve a reference to a row. Since rows may also be indexed, this allows A[i][j] notation to be used.
Compute column density
[protected]
|
http://www.linalg.org/linbox-html/classLinBox_1_1DenseMatrixBase.html
|
crawl-001
|
en
|
refinedweb
|
#include <sparse.h>
Inheritance diagram for SparseMatrixFactory:
[inline]
[virtual]
Given a field and vector type, construct a black box for the matrix over that field and using that vector type. This should be implemented by the user
Implements BlackboxFactory< Field, SparseMatrix< Field, Row > >.
[inline, virtual]
Compute and return the max-norm of the matrix.
Give the row dimension of the matrix
Give the column dimension of the matrix
Compute and return the Hadamard bound of the matrix.
|
http://www.linalg.org/linbox-html/classLinBox_1_1SparseMatrixFactory.html
|
crawl-001
|
en
|
refinedweb
|
One common request I get is how Jounce can work with the Navigation Framework.
My first reply is always, "Why do you want to use that?" As you can see in previous posts, the Jounce navigation works perfectly fine with region management to manage your needs. If you want the user to be able to "deep link" to a page, you can easily process the query string and parse it into the
InitParams for the application and deal with them there.
For the sake of illustration, however, I wanted to show one way Jounce can work with an existing navigation framework. In fact, to make it easy to follow along, the quick start example works mainly from the "Navigaton Application" template provided with Silverlight.
As a quick side note, I am very much aware of the
INavigationContentLoader interface. This may be the way to go and in the future I might write an adapter for Jounce, but I just don't see a compelling need to have URL-friendly links as I typically write applications that act like applications in Silverlight, not ones that try to mimic the web by having URLs.
The example here is available with the latest Jounce Source code (it's not part of an official release as of this writing so you can download it using the "Latest Version - Download" link in the upper right).
To start with, I simply created a new Silverlight navigation application.
Without overriding the default behavior, the navigation framework creates a new instance of the views you navigate to. To get around this model which I believe is wasteful and has undesired side effects, I changed the mapping for the various views to pass to one control that manages the navigation for me:
<uriMapper:UriMapper> <uriMapper:UriMapping <uriMapper:UriMapping <uriMapper:UriMapping </uriMapper:UriMapper>
Notice how I can translate the path to a view parameter, and that I am also introducing a mapping for "ShowText" that we'll use to show how you can grab parameters.
The
JounceNavigation control will get a new copy every time, but by using MEF it will guarantee we always access the same container. To do this, I created
NavigationContainer and it contains a single content control with a region so the region is only exported once:
<ContentControl HorizontalAlignment="Stretch" VerticalAlignment="Stretch" HorizontalContentAlignment="Stretch" VerticalContentAlignment="Stretch" Regions:ExportAsRegion.
In the code-behind, I simply export it:
namespace SilverlightNavigation.Views { [Export] public partial class NavigationContainer { public NavigationContainer() { InitializeComponent(); } } }
Now we can use this single container in the
JounceNavigation control. What we want to do is attach it when we navigate to the control, and detach it when we navigate away (a control can only have one parent, so if we don't detach it, we'll get an error when the next view is created if the previous view hasn't been garbage-collected yet).
The navigation control also does a few more things to integrate with the navigation framework. It will remember the last view so it can deactivate the view when navigating away. It will default to the "Home" view but basically takes any view passed in (based on the uri mappings we defined earlier) and raises the Jounce navigation event. Provided the view targets the region in our static container, it will appear there. Normally I'd do all of this in a view model but wanted to show it in code-behind for the sake of brevity (and to show how Jounce plays nice with standard controls as well).
public partial class JounceNavigation
{
    [Import]
    public IEventAggregator EventAggregator { get; set; }

    [Import]
    public NavigationContainer NavContainer { get; set; }

    private static string _lastView = string.Empty;

    public JounceNavigation()
    {
        InitializeComponent();
        CompositionInitializer.SatisfyImports(this);
        LayoutRoot.Children.Add(NavContainer);
        if (!string.IsNullOrEmpty(_lastView)) return;
        EventAggregator.Publish("Home".AsViewNavigationArgs());
        _lastView = "Home";
    }

    protected override void OnNavigatedTo(NavigationEventArgs e)
    {
        if (NavigationContext.QueryString.ContainsKey("view"))
        {
            var newView = NavigationContext.QueryString["view"];
            _lastView = newView;
            EventAggregator.Publish(_lastView.AsViewNavigationArgs());
            EventAggregator.Publish(NavigationContext);
        }
    }

    protected override void OnNavigatingFrom(NavigatingCancelEventArgs e)
    {
        if (!string.IsNullOrEmpty(_lastView))
        {
            EventAggregator.Publish(new ViewNavigationArgs(_lastView) {Deactivate = true});
        }
        LayoutRoot.Children.Remove(NavContainer);
    }
}
Notice that we publish two events. The first is the view navigation to wire in the target view. The second is a
NavigationContext event. The navigation context contains all of the query string information. Any view that needs to pull values from the query string can simply listen for this event. Because the view navigation is called first, the view will be in focus and ready when it receives the context message to parse any parameters.
To demonstrate this, let's look at the
TextView control. When you pass text in the url, it will simply display it. The XAML looks like this:
<Grid x: <TextBlock x: </Grid>
The code-behind looks like this:
[ExportAsView("TextView")]
[ExportViewToRegion("TextView", "MainContainer")]
public partial class TextView : IEventSink<NavigationContext>, IPartImportsSatisfiedNotification
{
    [Import]
    public IEventAggregator EventAggregator { get; set; }

    public TextView()
    {
        InitializeComponent();
    }

    public void HandleEvent(NavigationContext publishedEvent)
    {
        if (publishedEvent.QueryString.ContainsKey("text"))
        {
            TextArea.Text = publishedEvent.QueryString["text"];
        }
    }

    public void OnImportsSatisfied()
    {
        EventAggregator.SubscribeOnDispatcher(this);
    }
}
Pretty simple - it exports as a view name, targets the main container region, and then registers as the event sink for
NavigationContext messages. In this case we only have one listener. In more complex scenarios with multiple view models listening, the view model would simply inspect the "view" parameter to make sure it matches the target view (it could easily find this in a generic way by asking the view model router) and ignore the message if it does not.
To convert the "Home" and the "About" page took only two steps.
First, I changed them from
Page controls to
UserControl controls. I simply had to change the tag in XAML and remove the base class tag in the code-behind and the conversion was complete. Second, I tagged them as views and exported them to the main region:
namespace SilverlightNavigation.Views { [ExportAsView("About")] [ExportViewToRegion("About", "MainContainer")] public partial class About { public About() { InitializeComponent(); } } }
That's it - now I have a fully functional Jounce application that uses the navigation framework and handles URL parameters. You can click on the "text" tab to see the sample text and then change the URL to confirm it parses the additional text you create.
|
https://csharperimage.jeremylikness.com/2010/11/jounce-part-5-navigation-framework.html
|
CC-MAIN-2018-30
|
en
|
refinedweb
|
std::experimental::parallel::reduce
From cppreference.com
< cpp | experimental
1) same as reduce(first, last, typename std::iterator_traits<InputIt>::value_type{})
5) Reduces the range [first; last), possibly permuted and aggregated in unspecified manner, along with the initial value init over binary_op.
2,4,6) Same as (1,3,5), but executed according to policy.
The behavior is non-deterministic if
binary_op is not associative or not commutative.
The behavior is undefined if
binary_op modifies any element or invalidates any iterator in [first; last).
[edit] Parameters
[edit] Return value
Generalized sum of init and *first, *(first+1), ..., *(last - 1), where the elements of the range may be grouped and rearranged in arbitrary order.
[edit] Complexity
O(last - first) applications of binary_op.
reduce is the out-of-order version of std::accumulate:
Run this code
#include <iostream>
#include <chrono>
#include <vector>
#include <numeric>
#include <experimental/execution_policy>
#include <experimental/numeric>

int main()
{
    std::vector<double> v(10000007, 0.5);
    {
        auto t1 = std::chrono::high_resolution_clock::now();
        double result = std::accumulate(v.begin(), v.end(), 0.0);
        auto t2 = std::chrono::high_resolution_clock::now();
        std::chrono::duration<double, std::milli> ms = t2 - t1;
        std::cout << std::fixed << "std::accumulate result " << result
                  << " took " << ms.count() << " ms\n";
    }
    {
        auto t1 = std::chrono::high_resolution_clock::now();
        double result = std::experimental::parallel::reduce(
            std::experimental::parallel::par, v.begin(), v.end());
        auto t2 = std::chrono::high_resolution_clock::now();
        std::chrono::duration<double, std::milli> ms = t2 - t1;
        std::cout << "parallel::reduce result " << result
                  << " took " << ms.count() << " ms\n";
    }
}
Possible output:
std::accumulate result 5000003.50000 took 12.7365 ms parallel::reduce result 5000003.50000 took 5.06423 ms
|
https://en.cppreference.com/w/cpp/experimental/reduce
|
CC-MAIN-2018-30
|
en
|
refinedweb
|
import "debug/gosym"
Package gosym implements access to the Go symbol and line number tables embedded in Go binaries generated by the gc compilers.
DecodingError represents an error during the decoding of the symbol table.
func (e *DecodingError) Error() string
type Func struct { Entry uint64 *Sym End uint64 Params []*Sym // nil for Go 1.3 and later binaries Locals []*Sym // nil for Go 1.3 and later binaries FrameSize int LineTable *LineTable Obj *Obj }
A Func collects information about a single function.
NewLineTable returns a new PC/line table corresponding to the encoded data. Text must be the start address of the corresponding text segment.
LineToPC returns the program counter for the given line number, considering only program counters before maxpc. Callers should use Table's LineToPC method instead.
type Sym struct { Value uint64 Type byte Name string GoType uint64 // If this symbol is a function symbol, the corresponding Func Func *Func }
A Sym represents a single symbol table entry.
BaseName returns the symbol name without the package or receiver name.
PackageName returns the package part of the symbol name, or the empty string if there is none.
ReceiverName returns the receiver type name of this symbol, or the empty string if there is none.
Static reports whether this symbol is static (not visible outside its file).
type Table struct { Syms []Sym // nil for Go 1.3 and later binaries Funcs []Func Files map[string]*Obj // nil for Go 1.2 and later binaries Objs []Obj // nil for Go 1.2 and later binaries }
Table represents a Go symbol table. It stores all of the symbols decoded from the program and provides methods to translate between symbols, names, and addresses.
NewTable decodes the Go symbol table (the ".gosymtab" section in ELF), returning an in-memory representation. Starting with Go 1.3, the Go symbol table no longer includes symbol data.
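As a rough usage sketch (not part of the package documentation), these pieces are typically combined with debug/elf: read the .gopclntab and .gosymtab sections, build a Table, then map a program counter to a file and line. The binary path and the chosen PC below are placeholders.
package main

import (
	"debug/elf"
	"debug/gosym"
	"fmt"
	"log"
)

func main() {
	f, err := elf.Open("./myprog") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	textStart := f.Section(".text").Addr
	lineData, err := f.Section(".gopclntab").Data()
	if err != nil {
		log.Fatal(err)
	}
	var symData []byte
	if s := f.Section(".gosymtab"); s != nil {
		symData, _ = s.Data() // may be empty for Go 1.3 and later binaries
	}

	table, err := gosym.NewTable(symData, gosym.NewLineTable(lineData, textStart))
	if err != nil {
		log.Fatal(err)
	}

	pc := textStart // placeholder program counter
	file, line, fn := table.PCToLine(pc)
	if fn != nil {
		fmt.Printf("%#x is %s:%d (%s)\n", pc, file, line, fn.Name)
	}
}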
UnknownFileError represents a failure to find the specific file in the symbol table.
func (e UnknownFileError) Error() string
UnknownLineError represents a failure to map a line to a program counter, either because the line is beyond the bounds of the file or because there is no code on the given line.
func (e *UnknownLineError) Error() string
Package gosym imports 6 packages and is imported by 70 packages. Updated 2018-06-08.
|
https://godoc.org/debug/gosym
|
CC-MAIN-2018-30
|
en
|
refinedweb
|
Here is a listing of C++ quiz on “Large Objects” along with answers, explanations and/or solutions:
1. How to store large objects in c++ if they exceed their allocated memory?
a) memory heap
b) stack
c) queue
d) none of the mentioned
View Answer
Explanation: None.
2. When we are using heap operations what do we need to do to save the memory?
a) rename the objects
b) delete the objects after processing
c) both rename & delete the objects
d) none of the mentioned
View Answer
Explanation: when you allocate memory from the heap, you must remember to clean up objects when you’re done! Failure to do so is called a memory leak.
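For illustration only (this snippet is not part of the quiz), deleting a heap allocation after processing looks like this:
#include <iostream>

int main()
{
    int *big = new int[1000000];   // allocated on the heap
    big[0] = 42;                   // ...do some processing...
    std::cout << big[0] << std::endl;
    delete[] big;                  // clean up, otherwise the memory leaks
    return 0;
}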
3. Which container in c++ will take large objects?
a) string
b) class
c) vector
d) none of the mentioned
View Answer
Explanation: The vector is mainly used to store large objects in game programming and other operations.
4. What is the output of this program?
#include <iostream>
using namespace std;
class sample
{
public:
sample()
{
cout << "X::X()" << endl;
}
sample( sample const & )
{
cout << "X::X( X const & )" << endl;
}
sample& operator=( sample const & )
{
cout << "X::operator=(X const &)" << endl;
return *this;
}
};
sample f()
{
sample tmp;
return tmp;
}
int main()
{
sample x = f();
return 0;
}
a) X::operator=(X const &)
b) X::X( X const & )
c) X::X()
d) None of the mentioned
View Answer
Explanation: The copy of the temporary returned from f() is elided by the compiler (return value optimization), so only the default constructor runs and the output is X::X().
Output:
$ g++ large.cpp $ a.out X::X()
5. How to stop your program from eating so much ram?
a) Find a way to work with the data one at a time
b) Declare it in program memory, instead of on the stack
c) Use the hard drive, instead of RAM
d) All of the mentioned
View Answer
Explanation: None.
6. Which option is best to eliminate the memory problem?
a) use smart pointers
b) use raw pointers
c) use virtual destructor
d) use smart pointers & virtual destructor
View Answer
Explanation: A virtual destructor means that the object is destructed in the reverse order in which it was constructed, and the smart pointer will delete the object from memory when the object goes out of scope.
7. What is the size of the heap?
a) 10MB
b) 500MB
c) 1GB
d) Size of the heap memory is limited by the size of the RAM and the swap memory
View Answer
Explanation: None.
8. How to unlimit the size of the stack?
a) setrlimit()
b) unlimit()
c) both setrlimit() & unlimit()
d) none of the mentioned
View Answer
Explanation: None.
9. In Linux, how are the heaps and stacks managed?
a) ram
b) secondary memory
c) virtual memory
d) none of the mentioned
View Answer
Explanation: With virtual memory, we can keep track of all the objects and access them much faster than with any other approach.
10. Which is used to pass the large objects in c++?
a) pass by value
b) pass by reference
c) both pass by value & reference
d) none of the mentioned
View Answer
Explanation: By using pass by reference we need to pass only the address, so it can save a lot of memory.
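For illustration only (not part of the quiz), the sketch below contrasts the two: passing by value copies the whole vector, while passing by const reference passes only an address.
#include <iostream>
#include <vector>
using namespace std;

// copies the entire vector - expensive for a large object
long long sumByValue(vector<int> v)
{
    long long total = 0;
    for (int x : v) total += x;
    return total;
}

// passes only a reference (an address) - no copy is made
long long sumByReference(const vector<int> &v)
{
    long long total = 0;
    for (int x : v) total += x;
    return total;
}

int main()
{
    vector<int> big(1000000, 1);
    cout << sumByValue(big) << endl;      // copies one million ints first
    cout << sumByReference(big) << endl;  // copies nothing
    return 0;
}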
Sanfoundry Global Education & Learning Series – C++ Programming Language.
Here’s the list of Best Reference Books in C++ Programming Language.
To practice all features of C++ programming language, here is complete set on 1000+ Multiple Choice Questions and Answers on C++.
|
https://www.sanfoundry.com/c-plus-plus-quiz-questions-large-objects-2/
|
CC-MAIN-2018-30
|
en
|
refinedweb
|
Configuring RBAC For Your Kubernetes Service Accounts
In my previous article, I created a Service Account, and used its token and ca.crt file to access the Kuberentes API.
For various reasons, I wanted to access the Kubernetes API from outside of the cluster, but I didn’t want to (couldn’t…medium.com
Kubernetes: Up and Running — Dive into the Future of Infrastructure, Kelsey Hightower et al. (book link)
That code relied on using the legacy authorization system, actually. If you want better authorization system, you really want to use the Role Based Access Control, which was introduced in Kubernetes 1.6.
In this memo, I’m going to record what I did to enable RBAC and allow minimal resources to be accessed by the service account.
Configuring The Environment
First things first: I use Google Container Engine (GKE).
Google Container Engine is a powerful cluster manager and orchestration system for running your Docker containers. Set…cloud.google.com
As of this writing (Sep 2017), GKE’s clusters do not by default enable RBAC. You must explicitly ask for it by 1. Using the
gcloud beta version of the container command, and 2. providing the
--no-enable-legacy-authorization :
$ gcloud beta container clusters create --no-enable-legacy-authorization ...
Also, you will need to give your GCP user account explicit permission to create Roles and RoleBindings (among other things), by giving yourself the
cluster-admin role.
$ ACCOUNT=$(gcloud info --format='value(config.account)')
$ kubectl create clusterrolebinding owner-cluster-admin-binding \
--clusterrole cluster-admin \
--user $ACCOUNT
Without this, creating Roles/ClusterRoles/RoleBindings/ClusterRoleBindings may give you errors.
I think I should note that this information on having to give permission to your account was only found in the CoreOS troubleshooting guide. I may have overlooked something, but seriously, without this information, I was stuck. Thank you so much, CoreOS!
CoreOS provides Container Linux, Tectonic for Kubernetes and the Quay image registry; key components to secure…coreos.com
Roles And Bindings
Now, we’re ready to actually configure authorization for the service account.
In order to configure what resources a Service Account may access (and how), you need two extra things, which are Roles, and RoleBindings.
Roles define the authorization/capability that is to be applied to a Service Account, User, Group, etc. Here's a role that allows you to access
/api/v1/namespaces/default/pods and
/api/v1/namespaces/default/pods/$pod-name.
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "list"]
RoleBindings associates roles to Service Accounts, Users, Groups, etc. Here’s a role binding that binds
my-service-account with the
pod-reader above.
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: pod-reader-binding
  namespace: default
subjects:
- kind: ServiceAccount
  name: my-service-account
  namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
Now your service account should be able to access only the pod related resources, and you should only be able to view them.
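A quick way to check this (not covered in the original steps, and assuming a reasonably recent kubectl) is the auth can-i subcommand with impersonation; using the names from the examples above, you would expect something like:
$ kubectl auth can-i list pods \
    --as=system:serviceaccount:default:my-service-account \
    --namespace=default
yes

$ kubectl auth can-i delete pods \
    --as=system:serviceaccount:default:my-service-account \
    --namespace=default
no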
ClusterRoles and ClusterRoleBindings
Further adding to the confusion, on top of Roles and RoleBindings, there are ClusterRoles and ClusterRoleBindings.
Roles can only refer to resources in a specific namespace. Namespaces are logical grouping of resources within a Kubernetes cluster. This means that if you want to, for example, access resources such as Nodes which are global to a cluster and are not namespaced, Roles may not be used to grant such authorization. If, for example, you need to allow Prometheus to access the nodes’ statistics, you need to use ClusterRoles.
ClusterRoleBindings and RoleBindings can almost be used interchangeably, but you cannot assign ClusterRole capabilities using RoleBindings. Confusing? You bet!
Here’s my attempt at explaining how it works. Let’s say you create a ClusterRole that allows access to all pods, by replacing Role to ClusterRole, and omitting the “namespace” parameter in the previous example.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: pod-reader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "list"]
If you use RoleBinding to bind the service account to this ClusterRole (using the previously example), the service account will only be able to view pods in the “default” namespace.
However, if you change this to a ClusterRoleBinding, the service account will be able to view all pods in the cluster.
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: pod-reader-binding
subjects:
- kind: ServiceAccount
  name: my-service-account
  namespace: default
roleRef:
  kind: ClusterRole
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
That should be all that you need! Now you can create service accounts with minimum authorization to access only what’s needed from your Kubernetes cluster.
Have fun hacking!
|
https://medium.com/@lestrrat/configuring-rbac-for-your-kubernetes-service-accounts-c348b64eb242
|
CC-MAIN-2018-30
|
en
|
refinedweb
|
corp1 exchange 2010 will host the entire solution?
all users should be migrated to the corp1 server?
If all users will have @corp.com as the primary e-mail address, then in phase 1 you should configure all 3 Exchange environments (corp1, 2 and 3) with the corp.com domain as non-authoritative. The MX records should point to the corp1 server, and you should configure the shared namespace on every Exchange environment. In this scenario all mail to corp.com will go into the corp1 server, and if the recipient is not there it will send it to another mail server (corp2 or corp3). This is done for coexistence. Is that what you intend to get here?
Because you have 3 environments you need to test and see if you don't get loops in the mail forwarding process. Also, the current corp.com 15-25 users must be set up on the corp1.com mail server. It's the best approach as they are very few users, and without Exchange you won't get coexistence.
Currently each company; aside from the smallest that we will build mailboxes for, is capable of operating from their existing exchange server, and we assumed that for simplicity, we would keep that intact during phase 1.
Ownership wants us to utilize one common domain address starting day 1 (phase 1). We are basically absorbing the domain address from the smallest company; which is not currently operating in an exchange environment.
So, configure each exchange environment with @corp.com domain as non-authoritative, and point the MX records for @corp.com domain to @corp1.com exchange server, correct? Then configure the shared namespace on every exchange environment. That's exactly what i was thinking in terms of mail flow, corp1, then corp2, then corp3. This is exactly what we need initially.
We have couple extra domain addresses we are going to setup for just 1 user at each location, and see how it works. That will be our test. Sound like a good idea?
Thanks so much!
During phase 2, all users will be migrated to the corp1 server, yes.
yes you can test with an aditional domain. point the mx records to the corp1.com exchange, and then test the shared namespace configurations for that namespace. dont forget to test it with external and internal e-mails. Mails should flow when sent internally as well. It sounds like a very good idea.
if you need some extra help or have some extra questions let me know.
I've gone through many different mergers like this and wanted to share my thoughts on this.
If the AD domains are going to remain separate during the migration, you will run into autodiscover issues with Exchange 2007/2010 and Outlook 2007/2010. Outlook will automatically search for autodiscover.corp.com due to the primary SMTP address on that user, when in reality his email account resides on corp1.com's exchange server. There are work arounds for that, and if the ultimate goal is to have one shared AD and Exchange environment, this would be OK for the short term. I'm currently working with a client who has two business units that are completely seperate AD domains trying to share the same SMTP namespace long term, and it's not pretty with over 1000 users.
If in the long term the AD structure and companies will remain separate, I would look at using a subdomain
so corp.domain.com, company1.domain.com, company2.domain.com with MX records for each subdomain pointing to the respective servers.
BTW, autodiscover problems can cause issues like OOF and offline address book download errors, and can also prevent Outlook automatic configuration from taking place.
But you are right, autodiscover is tightly integrated to the exchange web services, so calendaring, out of office etc. will break.
The next step is configuring the shared namespace on each mail server, correct? Do we need to wait until we have established VPN connectivity with each other?
No server can have the domain as authoritative. If you have it as authoritative, the mail won't be sent to another mail server when the mail address doesn't exist there.
What about the other mail servers then, do they just need to configure the SMTP address and domain on their server? They are using Exchange 2003, so they don't have a receive connector to configure. They just need to allow from my public IP correct?
|
https://www.experts-exchange.com/questions/27670018/Planning-Corporate-Merger-of-Exchange.html
|
CC-MAIN-2018-30
|
en
|
refinedweb
|
symbols
A collection of
Symbol objects for standard component properties and
methods. These let mixins and a component internally communicate without
exposing these properties and methods in the component's public API. They
also help avoid unintentional name collisions, as a component developer must
specifically import the
symbols module and reference one of its symbols.
To use these
Symbol objects in your own component, include this module and
then create a property or method whose key is the desired Symbol. E.g.,
ShadowTemplateMixin expects a component to define
a property called symbols.template:
import ShadowTemplateMixin from 'elix/src/ShadowTemplateMixin.js';
import * as symbols from 'elix/src/symbols.js';

class MyElement extends ShadowTemplateMixin(HTMLElement) {
  get [symbols.template]() {
    return `Hello, <em>world</em>.`;
  }
}
The above use of symbols.template keeps the template definition out of the component's public API and helps avoid unintentional name collisions.
While this project generally uses
Symbol objects to hide component
internals, Elix does make some exceptions for methods or properties that are
very helpful to have handy during debugging. E.g.,
ReactiveMixin exposes its setState
method publicly, even though invoking that method from outside a component is
generally bad practice. The mixin exposes
setState because it's very useful
to have access to that in a debug console.
API
canGoLeft property
Symbol for the
canGoLeft property.
A component can implement this property to indicate that the user is currently able to move to the left.
Type:
boolean
canGoRight property
Symbol for the
canGoRight property.
A component can implement this property to indicate that the user is currently able to move to the right.
Type:
boolean
click() method
Symbol for the
click method.
This method is invoked when an element receives an operation that should
be interpreted as a click. ClickSelectionMixin
invokes this when the element receives a
mousedown event, for example.
contentSlot property.
Type:
HTMLSlotElement
defaultFocus constant
Symbol for the
defaultFocus property.
This is used by the defaultFocus utility to determine the default focus target for an element.
elementsWithTransitions constant
Symbol for the
elementsWithTransitions property.
TransitionEffectMixin inspects this property to determine which element(s) have CSS transitions applied to them for visual effects.
getItemText() method.
Returns:
string the text of the item
goDown() method
Symbol for the goDown method. This method is invoked when the user wants to go/navigate down.
goEnd() method
Symbol for the goEnd method. This method is invoked when the user wants to go/navigate to the end (e.g., of a list).
goLeft() method
Symbol for the goLeft method. This method is invoked when the user wants to go/navigate left. Mixins that make use of this method include KeyboardDirectionMixin and SwipeDirectionMixin.
goRight() method
Symbol for the goRight method. This method is invoked when the user wants to go/navigate right. Mixins that make use of this method include KeyboardDirectionMixin and SwipeDirectionMixin.
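A sketch of how a component might respond to these navigation methods inside its class body; selectPrevious and selectNext are hypothetical helpers, not part of this module:
[symbols.goLeft]() {
  // Move the selection backward (hypothetical helper).
  this.selectPrevious();
}
[symbols.goRight]() {
  // Move the selection forward (hypothetical helper).
  this.selectNext();
}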
goStart() method
Symbol for the goStart method. This method is invoked when the user wants to go/navigate to the start (e.g., of a list).
goUp() method
Symbol for the goUp method. This method is invoked when the user wants to go/navigate up.
hasDynamicTemplate constant
Symbol for the hasDynamicTemplate property. If your component class does not always use the same template, define a static class property getter with this symbol and have it return true. This will disable template caching for your component.
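A sketch of what that looks like; the variant attribute and the templates returned are purely illustrative:
class MyDynamicElement extends ShadowTemplateMixin(HTMLElement) {
  // Opt out of template caching because the template below can vary.
  static get [symbols.hasDynamicTemplate]() {
    return true;
  }
  [symbols.template]() {
    return this.getAttribute('variant') === 'fancy' ?
      `<em>Hello</em>` :
      `Hello`;
  }
}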
keydown() method
Symbol for the keydown method. This method is invoked when an element receives a keydown event.
The convention for implementations of symbols.keydown is that the last mixin applied wins. That is, if an implementation of symbols.keydown did handle the event, it can return immediately. If it did not, it should invoke super to let implementations further up the prototype chain have their chance.
This method takes a KeyboardEvent parameter that contains the event being processed.
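A sketch of that convention, handling one key and otherwise deferring to super; the Escape handling shown is purely illustrative:
[symbols.keydown](event) {
  let handled = false;
  if (event.key === 'Escape') {
    // Respond to the key here (illustrative behavior only).
    handled = true;
  }
  // If we didn't handle the event, give implementations further up the
  // prototype chain their chance.
  return handled || (super[symbols.keydown] && super[symbols.keydown](event));
}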
mouseenter() method
Symbol for the mouseenter method.
mouseleave() method
Symbol for the mouseleave method.
raiseChangeEvents property
Symbol for the raiseChangeEvents property. Code that may raise property change events sets this property to true before doing its work, and back to false when done:
this[symbols.raiseChangeEvents] = true;
// Do work here, possibly setting properties, like:
this.foo = 'Hello';
this[symbols.raiseChangeEvents] = false;
Elsewhere, property setters that raise change events should only do so if this property is true:
set foo(value) {
  // Save foo value here, do any other work.
  if (this[symbols.raiseChangeEvents]) {
    // Raise the corresponding change event (illustrative completion of the example).
    this.dispatchEvent(new CustomEvent('foo-changed'));
  }
}
Type: boolean
render() method
Symbol for an internal render method.
ReactiveMixin has a public render method that can be invoked to force the component to render. That public method internally invokes a symbols.render method, which a component can implement to actually render itself.
You can implement a symbols.render method if necessary, but the most common way for Elix components to render themselves is to use RenderUpdatesMixin, ShadowTemplateMixin, and/or ContentItemsMixin, all of which provide a symbols.render method.
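As a minimal sketch of a component that renders itself directly instead of using the mixins above; the signature of the internal method isn't spelled out here, so this assumes it takes no arguments, and this.state.count is a hypothetical state member:
[symbols.render]() {
  // Let any base implementation render first.
  if (super[symbols.render]) { super[symbols.render](); }
  // Illustrative direct DOM update; real components usually rely on the mixins above.
  this.textContent = `Count: ${this.state.count}`;
}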
rendering property
Symbol for the rendering property. ReactiveMixin sets this property to true during rendering; at other times it will be false.
Type: boolean
rightToLeft property
Symbol for the rightToLeft property. LanguageDirectionMixin sets this to true if the element is rendered right-to-left (the element has or inherits a dir attribute with the value rtl).
This property wraps the internal state member state.languageDirection, and is true if that member equals the string "rtl".
Type: boolean
scrollTarget property
Symbol for the scrollTarget property. This property indicates which element in a component's shadow subtree should be scrolled. SelectionInViewMixin can use this property to determine which element should be scrolled to keep the selected item in view.
Type: Element
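For example, a component could identify its scrolling region in the shadow tree; the '#viewport' id is hypothetical:
get [symbols.scrollTarget]() {
  // The shadow element that should be scrolled to keep the selection in view.
  return this.shadowRoot.querySelector('#viewport');
}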
startEffect() method
Symbol for the startEffect method. A component using TransitionEffectMixin can invoke this method to trigger the application of a named, asynchronous CSS transition effect.
This method takes a single string parameter giving the name of the effect to start.
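For example, a component might kick off a hypothetical effect named 'open' in response to user action:
// E.g., inside a click handler; 'open' is a hypothetical effect name.
this[symbols.startEffect]('open');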
swipeLeft() method
Symbol for the swipeLeft method. The swipe mixins TouchSwipeMixin and TrackpadSwipeMixin invoke this method when the user finishes a gesture to swipe left.
swipeRight() method
Symbol for the swipeRight method. The swipe mixins TouchSwipeMixin and TrackpadSwipeMixin invoke this method when the user finishes a gesture to swipe right.
swipeTarget property
Symbol for the swipeTarget property.
Type: HTMLElement
template constant
Symbol for the template method. ShadowTemplateMixin uses this property to obtain a component's template, which it will clone into a component's shadow root.
https://component.kitchen/elix/symbols