Studying the performance of the MS SQL Server 2016 Developer and PostgreSQL 10.5 DBMSs for 1C

Goals and requirements of testing "1C Accounting"


The main goal of the testing is to compare the behavior of the 1C system on two different DBMSs, all other conditions being equal. That is, the configuration of the 1C databases and the initial data population must be identical for every test run.

The main metrics to be collected during testing:

  • Execution time of each test (recorded by the 1C development department)
  • Load on the DBMS and the server environment during the tests (recorded by the DBMS administrator and by the system administrator responsible for the server environment, respectively)

The 1C system must be tested with the client-server architecture in mind, so it is necessary to fully emulate one or more users working in the system, entering information through the interface and saving it to the database. At the same time, a large volume of periodic information must be posted over a long period so that totals accumulate in the accumulation registers.

To run the tests, an algorithm was developed in the form of a scripted test for the 1C Accounting 3.0 configuration, in which test data is entered into the 1C system sequentially. The script allows various settings to be specified for the operations performed and for the amount of test data. A detailed description is given below.

Description of the settings and characteristics of the test environment


We at Fortis decided to double-check the results ourselves, including with the well-known Gilev test.

We were also prompted to run our own tests by several publications on how performance changes when moving from MS SQL Server to PostgreSQL, for example: Battle of 1C: PostgreSQL 9.10 vs MS SQL 2016.

So, here is the infrastructure used for testing:
Parameter | 1C | MS SQL | PostgreSQL
CPU cores | 8 | 8 | 8
RAM, GB | 16 | 32 | 32
OS | MS Windows Server 2012 R2 Standard | MS Windows Server 2012 R2 Standard | CentOS 7.6.1810
Bitness | x64 | x64 | x64
1C platform | 8.3.13.1865 | -- | --
DBMS version | -- | 13.0.5264.1 | 10.5 (4.8.5.20150623)

The servers for MS SQL and PostgreSQL were virtual machines and were run one at a time for the test in question. 1C ran on a separate server.

Details
Hypervisor specification:
Model: Supermicro SYS-6028R-TRT
CPU: Intel® Xeon® CPU E5-2630 v3 @ 2.40GHz (2 sockets × 16 CPU HT = 32 CPU)
RAM: 212 GB
OS: VMware ESXi 6.5
PowerProfile: Performance

Hypervisor disk subsystem:
Controller: Adaptec 6805, cache size: 512 MB
Volume: RAID 10, 5.7 TB
Stripe size: 1024 KB
Write cache: on
Read cache: off
Disks: 6 × HGST HUS726T6TAL
Sector size: 512 bytes
Write cache: enabled

PostgreSQL was configured as follows:

  1. postgresql.conf:
    The basic settings were produced with the calculator at pgconfigurator.cybertec.at; the parameters huge_pages, checkpoint_timeout, max_wal_size, min_wal_size and random_page_cost were then changed based on information from the sources mentioned at the end of the publication. The value of temp_buffers was increased because 1C makes active use of temporary tables:

     listen_addresses = '*'
     max_connections = 1000          # maximum number of simultaneous connections
     # Shared buffer cache, roughly 25% of the 32 GB of RAM.
     # Requires huge pages to be reserved in Linux (vm.nr_hugepages).
     shared_buffers = 9GB
     huge_pages = on                 # use huge pages for the shared buffers
     # Memory for temporary tables, which 1C uses actively.
     temp_buffers = 256MB
     # Memory per sort/hash operation: ORDER BY, DISTINCT, merge joins,
     # hash-based aggregation, hash-based processing of IN subqueries.
     # The calculator suggested 64MB for the "Mostly complicated real-time SQL queries" profile;
     # the value was raised.
     work_mem = 128MB
     # Memory for maintenance operations: VACUUM, index creation, etc.
     maintenance_work_mem = 512MB
     # Together with the kernel dirty-page limits (vm.dirty_background_bytes, vm.dirty_bytes)
     # these checkpoint settings smooth out the I/O spike at CHECKPOINT.
     checkpoint_timeout = 30min
     max_wal_size = 3GB
     min_wal_size = 512MB
     checkpoint_completion_target = 0.9
     seq_page_cost = 1               # cost of a sequential page read
     # Cost of a random page read; the default is 4, lowered for RAID 10.
     random_page_cost = 2.5
     # Estimate of the memory available for caching: shared_buffers plus the OS PageCache.
     effective_cache_size = 22GB

  2. Kernel and OS parameters:

    The settings are specified in the form of a tuned daemon profile (a short sketch of applying and verifying these settings is given after this list):

     [sysctl]
     # Limit the volume of dirty pages in the page cache so that data is flushed
     # to disk more often and in smaller portions.
     # The default percentage-based thresholds (10, 30) are too large for this amount
     # of RAM and lead to I/O bursts at CHECKPOINT.
     # The values are chosen with the 512MB write-back cache of the RAID controller in mind.
     vm.dirty_background_bytes = 67108864
     vm.dirty_bytes = 536870912
     # Minimize swapping; swap is kept only as protection against the OOM killer.
     vm.swappiness = 1
     # Make the scheduler less eager to migrate processes between CPUs,
     # so that a process stays on its CPU longer.
     kernel.sched_migration_cost_ns = 5000000
     # Disable automatic task grouping by the CPU scheduler;
     # it helps desktop workloads, not a database server.
     kernel.sched_autogroup_enabled = 0
     # Number of huge pages to reserve; must be enough to cover shared_buffers.
     # See https://www.postgresql.org/docs/11/kernel-resources.html#LINUX-HUGE-PAGES
     vm.nr_hugepages = 5000

     [vm]
     # Disable transparent huge pages: they are allocated on the fly
     # and can cause stalls and memory fragmentation.
     transparent_hugepages=never

     [cpu]
     # Keep the CPUs at maximum performance and minimum latency.
     force_latency=1
     governor=performance
     energy_perf_bias=performance
     min_perf_pct=100

  3. File system:

     # Creating the file system:
     # stride and stripe_width are calculated for RAID 10 on 6 disks with a 1024 KB stripe
     mkfs.ext4 -E stride=256,stripe_width=768 /dev/sdb
     # Mount options (/etc/fstab):
     /dev/sdb /var/lib/pgsql ext4 noatime,nodiratime,data=ordered,barrier=0,errors=remount-ro 0 2
     # noatime,nodiratime - do not update access times for files and directories
     # data=ordered - journal metadata only; data blocks are written before the metadata is committed
     # barrier=0 - disable write barriers; acceptable only with a battery-backed RAID controller cache
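As a quick sanity check of items 1 and 2 above, the applied values can be verified directly on the database server. This is only a minimal sketch: the tuned profile name (pg-1c) and the PostgreSQL service name (postgresql-10) are assumptions and may differ, for example with the 1C-specific PostgreSQL builds.

     # Apply the custom tuned profile (assumed to be saved as /etc/tuned/pg-1c/tuned.conf)
     tuned-adm profile pg-1c
     tuned-adm active

     # Check that huge pages were reserved and transparent huge pages are disabled
     grep -i huge /proc/meminfo
     cat /sys/kernel/mm/transparent_hugepage/enabled

     # Restart PostgreSQL and confirm that the key parameters took effect
     systemctl restart postgresql-10
     sudo -u postgres psql -c "SELECT name, setting, unit FROM pg_settings WHERE name IN ('shared_buffers','huge_pages','temp_buffers','work_mem','checkpoint_timeout','max_wal_size','random_page_cost');"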

The full contents of the postgresql.conf file:
 # ----------------------------- # PostgreSQL configuration file # ----------------------------- # # This file consists of lines of the form: # # name = value # # (The "=" is optional.) Whitespace may be used. Comments are introduced with # "#" anywhere on a line. The complete list of parameter names and allowed # values can be found in the PostgreSQL documentation. # # The commented-out settings shown in this file represent the default values. # Re-commenting a setting is NOT sufficient to revert it to the default value; # you need to reload the server. # # This file is read on server startup and when the server receives a SIGHUP # signal. If you edit the file on a running system, you have to SIGHUP the # server for the changes to take effect, run "pg_ctl reload", or execute # "SELECT pg_reload_conf()". Some parameters, which are marked below, # require a server shutdown and restart to take effect. # # Any parameter can also be given as a command-line option to the server, eg, # "postgres -c log_connections=on". Some parameters can be changed at run time # with the "SET" SQL command. # # Memory units: kB = kilobytes Time units: ms = milliseconds # MB = megabytes s = seconds # GB = gigabytes min = minutes # TB = terabytes h = hours # d = days #------------------------------------------------------------------------------ # FILE LOCATIONS #------------------------------------------------------------------------------ # The default values of these variables are driven from the -D command-line # option or PGDATA environment variable, represented here as ConfigDir. #data_directory = 'ConfigDir' # use data in another directory # (change requires restart) #hba_file = 'ConfigDir/pg_hba.conf' # host-based authentication file # (change requires restart) #ident_file = 'ConfigDir/pg_ident.conf' # ident configuration file # (change requires restart) # If external_pid_file is not explicitly set, no extra PID file is written. 
#external_pid_file = '' # write an extra PID file # (change requires restart) #------------------------------------------------------------------------------ # CONNECTIONS AND AUTHENTICATION #------------------------------------------------------------------------------ # - Connection Settings - listen_addresses = '*' # what IP address(es) to listen on; # comma-separated list of addresses; # defaults to 'localhost'; use '*' for all # (change requires restart) #port = 5432 # (change requires restart) max_connections = 1000 # (change requires restart) #superuser_reserved_connections = 3 # (change requires restart) #unix_socket_directories = '/var/run/postgresql, /tmp' # comma-separated list of directories # (change requires restart) #unix_socket_group = '' # (change requires restart) #unix_socket_permissions = 0777 # begin with 0 to use octal notation # (change requires restart) #bonjour = off # advertise server via Bonjour # (change requires restart) #bonjour_name = '' # defaults to the computer name # (change requires restart) # - Security and Authentication - #authentication_timeout = 1min # 1s-600s ssl = off #ssl_ciphers = 'HIGH:MEDIUM:+3DES:!aNULL' # allowed SSL ciphers #ssl_prefer_server_ciphers = on #ssl_ecdh_curve = 'prime256v1' #ssl_dh_params_file = '' #ssl_cert_file = 'server.crt' #ssl_key_file = 'server.key' #ssl_ca_file = '' #ssl_crl_file = '' #test #password_encryption = md5 # md5 or scram-sha-256 #db_user_namespace = off row_security = off # GSSAPI using Kerberos #krb_server_keyfile = '' #krb_caseins_users = off # - TCP Keepalives - # see "man 7 tcp" for details #tcp_keepalives_idle = 0 # TCP_KEEPIDLE, in seconds; # 0 selects the system default #tcp_keepalives_interval = 0 # TCP_KEEPINTVL, in seconds; # 0 selects the system default #tcp_keepalives_count = 0 # TCP_KEEPCNT; # 0 selects the system default #------------------------------------------------------------------------------ # RESOURCE USAGE (except WAL) #------------------------------------------------------------------------------ # - Memory - shared_buffers = 9GB # min 128kB # (change requires restart) huge_pages = on # on, off, or try # (change requires restart) temp_buffers = 256MB # min 800kB #max_prepared_transactions = 0 # zero disables the feature # (change requires restart) # Caution: it is not advisable to set max_prepared_transactions nonzero unless # you actively intend to use prepared transactions. 
# work_mem = 128MB # min 64kB maintenance_work_mem = 512MB # min 1MB #replacement_sort_tuples = 150000 # limits use of replacement selection sort #autovacuum_work_mem = -1 # min 1MB, or -1 to use maintenance_work_mem #max_stack_depth = 2MB # min 100kB dynamic_shared_memory_type = posix # the default is the first option # supported by the operating system: # posix # sysv # windows # mmap # use none to disable dynamic shared memory # (change requires restart) # - Disk - #temp_file_limit = -1 # limits per-process temp file space # in kB, or -1 for no limit # - Kernel Resource Usage - max_files_per_process = 10000 # min 25 # (change requires restart) shared_preload_libraries = 'online_analyze, plantuner' # (change requires restart) # - Cost-Based Vacuum Delay - #vacuum_cost_delay = 0 # 0-100 milliseconds #vacuum_cost_page_hit = 1 # 0-10000 credits #vacuum_cost_page_miss = 10 # 0-10000 credits #vacuum_cost_page_dirty = 20 # 0-10000 credits #vacuum_cost_limit = 200 # 1-10000 credits # - Background Writer - bgwriter_delay = 20ms # 10-10000ms between rounds bgwriter_lru_maxpages = 400 # 0-1000 max buffers written/round bgwriter_lru_multiplier = 4.0 # 0-10.0 multiplier on buffers scanned/round bgwriter_flush_after = 0 # measured in pages, 0 disables # - Asynchronous Behavior - effective_io_concurrency = 3 # 1-1000; 0 disables prefetching max_worker_processes = 8 # (change requires restart) max_parallel_workers_per_gather = 4 # taken from max_parallel_workers max_parallel_workers = 8 # maximum number of max_worker_processes that # can be used in parallel queries #old_snapshot_threshold = -1 # 1min-60d; -1 disables; 0 is immediate # (change requires restart) #backend_flush_after = 0 # measured in pages, 0 disables #------------------------------------------------------------------------------ # WRITE AHEAD LOG #------------------------------------------------------------------------------ # - Settings - wal_level = minimal # minimal, replica, or logical # (change requires restart) #fsync = on # flush data to disk for crash safety # (turning this off can cause # unrecoverable data corruption) #synchronous_commit = on # synchronization level; # off, local, remote_write, remote_apply, or on wal_sync_method = fdatasync # the default is the first option # supported by the operating system: # open_datasync # fdatasync (default on Linux) # fsync # fsync_writethrough # open_sync #wal_sync_method = open_datasync #full_page_writes = on # recover from partial page writes wal_compression = on # enable compression of full-page writes #wal_log_hints = off # also do full page writes of non-critical updates # (change requires restart) wal_buffers = -1 # min 32kB, -1 sets based on shared_buffers # (change requires restart) wal_writer_delay = 200ms # 1-10000 milliseconds wal_writer_flush_after = 1MB # measured in pages, 0 disables commit_delay = 1000 # range 0-100000, in microseconds #commit_siblings = 5 # range 1-1000 # - Checkpoints - checkpoint_timeout = 30min # range 30s-1d max_wal_size = 3GB min_wal_size = 512MB checkpoint_completion_target = 0.9 # checkpoint target duration, 0.0 - 1.0 #checkpoint_flush_after = 256kB # measured in pages, 0 disables #checkpoint_warning = 30s # 0 disables # - Archiving - #archive_mode = off # enables archiving; off, on, or always # (change requires restart) #archive_command = '' # command to use to archive a logfile segment # placeholders: %p = path of file to archive # %f = file name only # eg 'test ! 
-f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f' #archive_timeout = 0 # force a logfile segment switch after this # number of seconds; 0 disables #------------------------------------------------------------------------------ # REPLICATION #------------------------------------------------------------------------------ # - Sending Server(s) - # Set these on the master and on any standby that will send replication data. max_wal_senders = 0 # max number of walsender processes # (change requires restart) #wal_keep_segments = 130 # in logfile segments, 16MB each; 0 disables #wal_sender_timeout = 60s # in milliseconds; 0 disables #max_replication_slots = 10 # max number of replication slots # (change requires restart) #track_commit_timestamp = off # collect timestamp of transaction commit # (change requires restart) # - Master Server - # These settings are ignored on a standby server. #synchronous_standby_names = '' # standby servers that provide sync rep # method to choose sync standbys, number of sync standbys, # and comma-separated list of application_name # from standby(s); '*' = all #vacuum_defer_cleanup_age = 0 # number of xacts by which cleanup is delayed # - Standby Servers - # These settings are ignored on a master server. #hot_standby = on # "off" disallows queries during recovery # (change requires restart) #max_standby_archive_delay = 30s # max delay before canceling queries # when reading WAL from archive; # -1 allows indefinite delay #max_standby_streaming_delay = 30s # max delay before canceling queries # when reading streaming WAL; # -1 allows indefinite delay #wal_receiver_status_interval = 10s # send replies at least this often # 0 disables #hot_standby_feedback = off # send info from standby to prevent # query conflicts #wal_receiver_timeout = 60s # time that receiver waits for # communication from master # in milliseconds; 0 disables #wal_retrieve_retry_interval = 5s # time to wait before retrying to # retrieve WAL after a failed attempt # - Subscribers - # These settings are ignored on a publisher. 
#max_logical_replication_workers = 4 # taken from max_worker_processes # (change requires restart) #max_sync_workers_per_subscription = 2 # taken from max_logical_replication_workers #------------------------------------------------------------------------------ # QUERY TUNING #------------------------------------------------------------------------------ # - Planner Method Configuration - #enable_bitmapscan = on #enable_hashagg = on #enable_hashjoin = on #enable_indexscan = on #enable_indexonlyscan = on #enable_material = on #enable_mergejoin = on #enable_nestloop = on #enable_seqscan = on #enable_sort = on #enable_tidscan = on # - Planner Cost Constants - seq_page_cost = 1 # measured on an arbitrary scale random_page_cost = 2.5 # same scale as above #cpu_tuple_cost = 0.01 # same scale as above #cpu_index_tuple_cost = 0.005 # same scale as above #cpu_operator_cost = 0.0025 # same scale as above #parallel_tuple_cost = 0.1 # same scale as above #parallel_setup_cost = 1000.0 # same scale as above #min_parallel_table_scan_size = 8MB #min_parallel_index_scan_size = 512kB effective_cache_size = 22GB # - Genetic Query Optimizer - #geqo = on #geqo_threshold = 12 #geqo_effort = 5 # range 1-10 #geqo_pool_size = 0 # selects default based on effort #geqo_generations = 0 # selects default based on effort #geqo_selection_bias = 2.0 # range 1.5-2.0 #geqo_seed = 0.0 # range 0.0-1.0 # - Other Planner Options - #default_statistics_target = 100 # range 1-10000 #constraint_exclusion = partition # on, off, or partition #cursor_tuple_fraction = 0.1 # range 0.0-1.0 from_collapse_limit = 20 join_collapse_limit = 20 # 1 disables collapsing of explicit # JOIN clauses #force_parallel_mode = off #------------------------------------------------------------------------------ # ERROR REPORTING AND LOGGING #------------------------------------------------------------------------------ # - Where to Log - log_destination = 'stderr' # Valid values are combinations of # stderr, csvlog, syslog, and eventlog, # depending on platform. csvlog # requires logging_collector to be on. # This is used when logging to stderr: logging_collector = on # Enable capturing of stderr and csvlog # into log files. Required to be on for # csvlogs. # (change requires restart) # These are only used if logging_collector is on: log_directory = 'pg_log' # directory where log files are written, # can be absolute or relative to PGDATA log_filename = 'postgresql-%a.log' # log file name pattern, # can include strftime() escapes #log_file_mode = 0600 # creation mode for log files, # begin with 0 to use octal notation log_truncate_on_rotation = on # If on, an existing log file with the # same name as the new log file will be # truncated rather than appended to. # But such truncation only occurs on # time-driven rotation, not on restarts # or size-driven rotation. Default is # off, meaning append to existing files # in all cases. log_rotation_age = 1d # Automatic rotation of logfiles will # happen after that time. 0 disables. log_rotation_size = 0 # Automatic rotation of logfiles will # happen after that much log output. # 0 disables. 
# These are relevant when logging to syslog: #syslog_facility = 'LOCAL0' #syslog_ident = 'postgres' #syslog_sequence_numbers = on #syslog_split_messages = on # This is only relevant when logging to eventlog (win32): # (change requires restart) #event_source = 'PostgreSQL' # - When to Log - #client_min_messages = notice # values in order of decreasing detail: # debug5 # debug4 # debug3 # debug2 # debug1 # log # notice # warning # error #log_min_messages = warning # values in order of decreasing detail: # debug5 # debug4 # debug3 # debug2 # debug1 # info # notice # warning # error # log # fatal # panic #log_min_error_statement = error # values in order of decreasing detail: # debug5 # debug4 # debug3 # debug2 # debug1 # info # notice # warning # error # log # fatal # panic (effectively off) #log_min_duration_statement = -1 # -1 is disabled, 0 logs all statements # and their durations, > 0 logs only # statements running at least this number # of milliseconds # - What to Log - #debug_print_parse = off #debug_print_rewritten = off #debug_print_plan = off #debug_pretty_print = on log_checkpoints = on log_connections = on log_disconnections = on log_duration = on #log_error_verbosity = default # terse, default, or verbose messages #log_hostname = off log_line_prefix = '< %m >' # special values: # %a = application name # %u = user name # %d = database name # %r = remote host and port # %h = remote host # %p = process ID # %t = timestamp without milliseconds # %m = timestamp with milliseconds # %n = timestamp with milliseconds (as a Unix epoch) # %i = command tag # %e = SQL state # %c = session ID # %l = session line number # %s = session start timestamp # %v = virtual transaction ID # %x = transaction ID (0 if none) # %q = stop here in non-session # processes # %% = '%' # eg '<%u%%%d> ' log_lock_waits = on # log lock waits >= deadlock_timeout log_statement = 'all' # none, ddl, mod, all #log_replication_commands = off log_temp_files = 0 # log temporary files equal or larger # than the specified size in kilobytes; # -1 disables, 0 logs all temp files log_timezone = 'W-SU' # - Process Title - #cluster_name = '' # added to process titles if nonempty # (change requires restart) #update_process_title = on #------------------------------------------------------------------------------ # RUNTIME STATISTICS #------------------------------------------------------------------------------ # - Query/Index Statistics Collector - #track_activities = on #track_counts = on #track_io_timing = on #track_functions = none # none, pl, all #track_activity_query_size = 1024 # (change requires restart) #stats_temp_directory = 'pg_stat_tmp' # - Statistics Monitoring - #log_parser_stats = off #log_planner_stats = off #log_executor_stats = off #log_statement_stats = off #------------------------------------------------------------------------------ # AUTOVACUUM PARAMETERS #------------------------------------------------------------------------------ autovacuum = on # Enable autovacuum subprocess? 'on' # requires track_counts to also be on. log_autovacuum_min_duration = 0 # -1 disables, 0 logs all actions and # their durations, > 0 logs only # actions running at least this number # of milliseconds. 
autovacuum_max_workers = 4 # max number of autovacuum subprocesses # (change requires restart) #autovacuum_naptime = 20s # time between autovacuum runs #autovacuum_vacuum_threshold = 50 # min number of row updates before # vacuum #autovacuum_analyze_threshold = 50 # min number of row updates before # analyze #autovacuum_vacuum_scale_factor = 0.2 # fraction of table size before vacuum #autovacuum_analyze_scale_factor = 0.1 # fraction of table size before analyze #autovacuum_freeze_max_age = 200000000 # maximum XID age before forced vacuum # (change requires restart) #autovacuum_multixact_freeze_max_age = 400000000 # maximum multixact age # before forced vacuum # (change requires restart) #autovacuum_vacuum_cost_delay = 20ms # default vacuum cost delay for # autovacuum, in milliseconds; # -1 means use vacuum_cost_delay #autovacuum_vacuum_cost_limit = -1 # default vacuum cost limit for # autovacuum, -1 means use # vacuum_cost_limit #------------------------------------------------------------------------------ # CLIENT CONNECTION DEFAULTS #------------------------------------------------------------------------------ # - Statement Behavior - #search_path = '"$user", public' # schema names #default_tablespace = '' # a tablespace name, '' uses the default #temp_tablespaces = '' # a list of tablespace names, '' uses # only default tablespace #check_function_bodies = on #default_transaction_isolation = 'read committed' #default_transaction_read_only = off #default_transaction_deferrable = off #session_replication_role = 'origin' #statement_timeout = 0 # in milliseconds, 0 is disabled #lock_timeout = 0 # in milliseconds, 0 is disabled #idle_in_transaction_session_timeout = 0 # in milliseconds, 0 is disabled #vacuum_freeze_min_age = 50000000 #vacuum_freeze_table_age = 150000000 #vacuum_multixact_freeze_min_age = 5000000 #vacuum_multixact_freeze_table_age = 150000000 #bytea_output = 'hex' # hex, escape #xmlbinary = 'base64' #xmloption = 'content' #gin_fuzzy_search_limit = 0 #gin_pending_list_limit = 4MB # - Locale and Formatting - datestyle = 'iso, dmy' #intervalstyle = 'postgres' timezone = 'W-SU' #timezone_abbreviations = 'Default' # Select the set of available time zone # abbreviations. Currently, there are # Default # Australia (historical usage) # India # You can create your own file in # share/timezonesets/. #extra_float_digits = 0 # min -15, max 3 #client_encoding = sql_ascii # actually, defaults to database # encoding # These settings are initialized by initdb, but they can be changed. 
lc_messages = 'ru_RU.UTF-8' # locale for system error message # strings lc_monetary = 'ru_RU.UTF-8' # locale for monetary formatting lc_numeric = 'ru_RU.UTF-8' # locale for number formatting lc_time = 'ru_RU.UTF-8' # locale for time formatting # default configuration for text search default_text_search_config = 'pg_catalog.russian' # - Other Defaults - #dynamic_library_path = '$libdir' #local_preload_libraries = '' #session_preload_libraries = '' #------------------------------------------------------------------------------ # LOCK MANAGEMENT #------------------------------------------------------------------------------ #deadlock_timeout = 1s max_locks_per_transaction = 256 # min 10 # (change requires restart) #max_pred_locks_per_transaction = 64 # min 10 # (change requires restart) #max_pred_locks_per_relation = -2 # negative values mean # (max_pred_locks_per_transaction # / -max_pred_locks_per_relation) - 1 #max_pred_locks_per_page = 2 # min 0 #------------------------------------------------------------------------------ # VERSION/PLATFORM COMPATIBILITY #------------------------------------------------------------------------------ # - Previous PostgreSQL Versions - #array_nulls = on #backslash_quote = safe_encoding # on, off, or safe_encoding #default_with_oids = off escape_string_warning = off #lo_compat_privileges = off #operator_precedence_warning = off #quote_all_identifiers = off standard_conforming_strings = off #synchronize_seqscans = on # - Other Platforms and Clients - #transform_null_equals = off #------------------------------------------------------------------------------ # ERROR HANDLING #------------------------------------------------------------------------------ #exit_on_error = off # terminate session on any error? #restart_after_crash = on # reinitialize after backend crash? #------------------------------------------------------------------------------ # CONFIG FILE INCLUDES #------------------------------------------------------------------------------ # These options allow settings to be loaded from files other than the # default postgresql.conf. #include_dir = 'conf.d' # include files ending in '.conf' from # directory 'conf.d' #include_if_exists = 'exists.conf' # include file only if it exists #include = 'special.conf' # include file #------------------------------------------------------------------------------ # CUSTOMIZED OPTIONS #------------------------------------------------------------------------------ online_analyze.threshold = 50 online_analyze.scale_factor = 0.1 online_analyze.enable = on online_analyze.verbose = off online_analyze.local_tracking = on online_analyze.min_interval = 10000 online_analyze.table_type = 'temporary' online_analyze.verbose='off' plantuner.fix_empty_table='on' 

MS SQL was configured as follows:

[screenshots of the MS SQL Server settings]

The 1C cluster settings were left at their defaults:

[screenshots of the 1C cluster settings]

No antivirus was installed on the servers, and no third-party software was installed either.

For MS SQL, tempdb was moved to a separate logical drive. However, the database data files and the transaction log files were kept on the same logical drive (that is, data and transaction log were not split onto separate logical drives).
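For reference, relocating tempdb in MS SQL Server is normally done with ALTER DATABASE ... MODIFY FILE statements followed by a restart of the SQL Server service. The sketch below only illustrates that standard approach: the drive letter T: and the target paths are made up, while tempdev and templog are the default logical file names of tempdb.

     # Point the tempdb data and log files at the dedicated drive (takes effect after a service restart)
     sqlcmd -S localhost -Q "ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'T:\TempDB\tempdb.mdf');"
     sqlcmd -S localhost -Q "ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'T:\TempDB\templog.ldf');"
     # Restart the SQL Server service so that tempdb is recreated at the new location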

Windows drive indexing was disabled on all logical drives of the MS SQL Server machine (as is commonly done in production environments).

Description of the main algorithm of the automated test script
The main simulated test period is one year, during which documents and reference data are created every day according to the specified parameters.

For each simulated day, the following data entry and output blocks are executed:

  1. Block 1, "Receipt of goods and services"
    • The "Counterparties" catalog is opened
    • A new element of the "Counterparties" catalog with the "Supplier" type is created
    • A new element of the "Contracts" catalog with the "With supplier" type is created for the new counterparty
    • The "Nomenclature" catalog is opened
    • A set of elements of the "Nomenclature" catalog with the "Product" type is created
    • A set of elements of the "Nomenclature" catalog with the "Service" type is created
    • The "Receipts of goods and services" document list is opened
    • A new "Receipt of goods and services" document is created, whose "Products" and "Services" tabular sections are filled with the data sets created above
    • The "Account card 41" report for the current month is generated (if an additional closing interval is specified)

  2. Block 2, "Sale of goods and services"

    • The "Counterparties" catalog is opened
    • A new element of the "Counterparties" catalog with the "Buyer" type is created
    • A new element of the "Contracts" catalog with the "With buyer" type is created for the new counterparty
    • The "Sales of goods and services" document list is opened
    • A new "Sale of goods and services" document is created, whose "Products" and "Services" tabular sections are filled according to the specified parameters, using the previously created data
    • The "Account card 41" report for the current month is generated (if an additional closing interval is specified)
  3. The "Account card 41" report for the current month is generated

At the end of each month of document creation, the following data entry and output blocks are executed:

  1. The "Account card 41" report is generated from the beginning of the year to the end of the month
  2. The "Turnover balance sheet" report is generated from the beginning of the year to the end of the month
  3. The "Month-end closing" routine operation is performed.

The result of a run is information about the test duration in hours, minutes, seconds and milliseconds.

Main features of the test script:

  1. Ability to disable/enable individual blocks
  2. Ability to specify the total number of documents for each block
  3. Ability to specify the number of documents per day for each block
  4. Ability to specify the number of products and services in a document
  5. Ability to set lists of quantity and price values for posting, used to create different sets of values in the documents

Basic test plan for each database:

  1. "First test." Under a single user, a small number of documents with simple tabular sections is created and the "Month-end closing" is performed
    • Expected execution time is about 20 minutes. Fills 1 month. Data: 50 "Receipt of goods and services" documents, 50 "Sale of goods and services" documents, 100 "Nomenclature" elements, 50 "Supplier" + "Contract" elements, 50 "Buyer" + "Contract" elements, 2 "Month-end closing" operations. Documents contain 1 product and 1 service.

  2. "Second test." Under a single user, a large number of documents with filled tabular sections is created and the month-end closing is performed

    • Expected execution time is 50-60 minutes. Fills 3 months. Data: 90 "Receipt of goods and services" documents, 90 "Sale of goods and services" documents, 540 "Nomenclature" elements, 90 "Supplier" + "Contract" elements, 90 "Buyer" + "Contract" elements, 3 "Month-end closing" operations. Documents contain 3 products and 3 services.

  3. "Third test." Documents are created in several concurrent user sessions. The month-end closing is performed.
    • Expected execution time is 40-60 minutes. Fills 2 months. Data: 50 "Receipt of goods and services" documents, 50 "Sale of goods and services" documents, 300 "Nomenclature" elements, 50 "Supplier" + "Contract" elements, 50 "Buyer" + "Contract" elements. Documents contain 3 products and 3 services.


In addition, the following operations were also performed and timed in each database:

  1. «…»
  2. Generating the "Account card 41" and "Turnover balance sheet" reports for the whole year
  3. Unloading and loading the 1C infobase via a "*.dt" file
  4. «…»


Results


And now the most interesting part - the results obtained on the MS SQL Server DBMS:

Details
[screenshots of the detailed test results on MS SQL Server]

The results on PostgreSQL, obtained with the settings described above:

[screenshots of the detailed test results on PostgreSQL]
The Gilev test:

Indicator | MS SQL | PostgreSQL | Difference of PostgreSQL vs MS SQL, % (improvement)
Gilev synthetic test (average) | 14.41 | 12.55 | -14.82
Max. speed, 1 thread (average) | 32 404.67 KB/s | 33 472.67 KB/s | +3.3
Max. speed (average) | 51 744 KB/s | 86 323.67 KB/s | +66.83
Recommended number of users (average) | 42 | 70 | +66.67

As the results show, in the overall synthetic test PostgreSQL lost on average 14.82% in performance to MS SQL. However, on the last two indicators PostgreSQL showed noticeably better results than MS SQL.

Specialized tests for 1C Accounting:
Test description | MS SQL, sec | PostgreSQL, sec | Difference of PostgreSQL vs MS SQL, % (improvement)
Script "First test" | 1056.45 | 1064 | -0.7
Script "Second test" | 3230.8 | 3236.6 | -0.2
Script "Third test" | 1707.45 | 1738.8 | -1.8
Script "Third test" (4 threads) | 1859.1 | 1864.9 | -0.3
«…» | 30 | 22 | +26.7
«…» report for 01.01.2018-31.12.2018 | 138.5 | 164.5 | -15.8
«…» | 316 | 397 | -20.4
Unloading to a "*.dt" file | 87 | 87 | 0
Loading from a "*.dt" file | 201 | 207 | -2.9
«…» for 2018 | 78 | 64.5 | +17.3

As the results show, with the settings described above 1C Accounting performs roughly the same on MS SQL and on PostgreSQL.

In both cases the DBMS ran stably.

Of course, even finer tuning may be possible, both of the DBMS and of the OS and file system. Everything was done following the publications which state that, when switching from MS SQL to PostgreSQL, performance either improves noticeably or stays roughly the same; in addition, as described above, a number of measures were taken in this test to optimize the CentOS OS itself and its file system.

It is worth noting that the Gilev test was run many times against PostgreSQL, and the best result is shown; against MS SQL the Gilev test was run only about three times, and no further tuning was done on the MS SQL side. All subsequent attempts were aimed at bringing the "elephant" up to the MS SQL figures.

Only after reaching the best difference between MS SQL and PostgreSQL in the Gilev synthetic test, with the settings described above, were the specialized tests for 1C Accounting run.

The overall conclusion is that, despite the noticeable drop in performance shown by PostgreSQL relative to MS SQL in the Gilev synthetic test, 1C Accounting can be deployed both on MS SQL and on PostgreSQL, provided the settings given above are applied.

Notes


It should be noted right away that this analysis was carried out only to compare 1C performance on different DBMSs.

This analysis and its conclusions are valid only for 1C Accounting under the conditions and software versions described above. Based on it, one cannot say exactly what would happen with other settings and software versions, or with a different 1C configuration.

However, the Gilev test results suggest that on all configurations running on 1C platform 8.3 and later, with proper tuning, the performance loss of PostgreSQL relative to MS SQL is unlikely to exceed 15%. It is also worth bearing in mind that any detailed test for a precise comparison would require significant time and resources. Based on this, a more likely assumption can be made: 1C version 8.3 and later can be migrated from MS SQL to PostgreSQL with a maximum performance loss of up to 15%. There are no objective obstacles to such a migration: this 15% may not appear at all, and if it does, it is usually enough to buy somewhat more powerful hardware only when actually needed.

It should also be noted that the tested database was small, i.e. significantly smaller than 100 GB, and the maximum number of concurrent threads was 4. This means that for large databases well over 100 GB (for example, around 1 TB), and for databases under intensive access (dozens or hundreds of active concurrent sessions), these results may not hold.

For a more objective analysis, it would be useful in the future to compare the released MS SQL Server 2019 Developer and PostgreSQL 12 installed on the same CentOS OS, and also to compare them with MS SQL installed on the latest Windows Server OS. Nobody installs PostgreSQL on Windows nowadays, since the performance penalty for PostgreSQL there would be very significant.

Of course, the Gilev test measures performance in general, not only for 1C. However, it is too early to claim that the MS SQL DBMS is always significantly better than PostgreSQL - there are too few facts for that. To confirm or refute such a claim, a number of other tests would be needed: for example, as for .NET, one would have to write atomic-operation tests and complex tests, run them repeatedly under different conditions, record the execution times and average the values, and then compare the results. That would be an objective analysis.

At the moment we are not ready to carry out such an analysis, but it is quite possible that we will do so in the future. Then we will describe in detail in which operations PostgreSQL is better than MS SQL and by how many percent, and in which operations MS SQL is better than PostgreSQL and by how many percent.

In addition, our test did not apply the MS SQL optimization methods described here. Perhaps that publication simply forgot to turn off Windows disk indexing.

When comparing the two DBMSs, one important point should be kept in mind: PostgreSQL is free and open source, while MS SQL is paid and closed source.

Now about the Gilev test itself. Outside the test runs, traces were captured for the synthetic (first) test and for all the other tests. The first test mainly issues queries for atomic operations (insert, update, delete and read) and complex operations (involving several tables, as well as creating, altering and dropping tables in the database), with different amounts of processed data. Therefore, the Gilev synthetic test can be considered fairly objective for comparing the average, uniform performance of two environments (including the DBMS) relative to each other. The absolute values themselves say little, but the ratio between them on two different environments is quite objective.
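To make the idea of timing atomic operations more concrete, here is a rough illustration on the PostgreSQL side (it is not the Gilev test itself, just an analogy): the standard pgbench utility drives a small built-in mix of UPDATE/SELECT/INSERT statements and reports transactions per second. The database name testdb and the numbers are arbitrary.

     # create a scratch database and pgbench's test tables (scale factor 10 ≈ 1 million rows in pgbench_accounts)
     sudo -u postgres createdb testdb
     sudo -u postgres pgbench -i -s 10 testdb
     # run the built-in TPC-B-like mix for 60 seconds with 4 client sessions and 4 worker threads
     sudo -u postgres pgbench -c 4 -j 4 -T 60 testdb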

As for the other Gilev indicators: the trace shows a maximum of 7 threads, yet the conclusion about the number of users is more than 50; moreover, from the captured queries it is not clear how the other indicators are calculated. Therefore, the remaining tests are not objective and are highly variable and approximate. Only specialized tests that take into account not just the specifics of the system itself but also the actual work of users can give more accurate values.

Acknowledgments


  • The 1C setup and the Gilev test runs were performed by colleagues who also made a significant contribution to this publication:
    • Roman Buts - 1C team lead
    • Alexander Gryaznov - 1C programmer
  • Colleagues at Fortis who made a significant contribution to the tuning of CentOS, PostgreSQL and more, but who preferred to remain anonymous

Special thanks also to uaggster and BP1988 for their advice on MS SQL and Windows.

Afterword


A curious analysis was also done in this article.

What results did you get, and how did you test?

Sources


Source: https://habr.com/ru/post/457602/

